LEVEL: INTERMEDIATE
Free Trial.
Large language models security
The rise of large language models (LLMs) has not only transformed how we build and interact with AI systems but has also introduced new and complex security challenges.
As LLMs become increasingly integrated into real-world applications, understanding how they can be attacked, exploited, and protected is no longer optional — it’s essential.
hrough a mix of engaging video lectures, hands-on labs and interactive checkpoints, you’ll dive into how LLMs can be exploited using techniques like jailbreaks, prompt injections and more. You’ll also gain practical skills in defending against these threats at the model, prompt, system and service levels, using structured frameworks to assess and strengthen LLM security.
hrough a mix of engaging video lectures, hands-on labs and interactive checkpoints, you’ll dive into how LLMs can be exploited using techniques like jailbreaks, prompt injections and more. You’ll also gain practical skills in defending against these threats at the model, prompt, system and service levels, using structured frameworks to assess and strengthen LLM security.
Write your awesome label here.
Vladislav TushkanovGroup Manager at Kaspersky AI Technology Research Center
Vladislav has been with Kaspersky since 2015. He and his team work at applying data science and machine learning techniques to detect threats — such as malware, phishing and spam — faster and better, as, well as researching cutting-edge AI technologies to predict threats that are yet to come.