LLM-powered Threat Modeling vs Security Testing

Date

Thursday, December 11, 2025

Time

11:30 – 12:30

Location

Online Webinar

This talk explores how threat modeling assisted by Large Language Models (LLMs) can shift security verification to the left in the software development lifecycle. By leveraging LLMs to analyze system architectures, it becomes possible to identify potential vulnerabilities and generate realistic attack paths and scenarios. This approach automates the transformation of abstract threat models into actionable test cases and mitigations at the design stage.

Through LLM-powered threat modeling, security validation evolves from a reactive process to a predictive one, anticipating and mitigating threats before they can be exploited in the wild.
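As a minimal, hypothetical sketch of the idea (not taken from the talk), a threat-modeling pass over a declared system architecture can be expressed as rules mapping component properties to STRIDE threat categories; in an LLM-powered workflow, a language model would replace or extend this static rule table and flesh out each finding into an attack scenario and test case. All names and rules below are illustrative assumptions.

```python
# Hypothetical sketch: derive candidate threats from a declared architecture
# using simple STRIDE-style rules. The component names and the rule table are
# illustrative assumptions; an LLM could generate or refine these mappings.

STRIDE_RULES = {
    "external_input": ["Spoofing", "Tampering"],
    "stores_data": ["Information disclosure", "Tampering"],
    "crosses_trust_boundary": ["Elevation of privilege", "Repudiation"],
}

def enumerate_threats(architecture):
    """Map each component's properties to candidate STRIDE threats."""
    threats = []
    for component, properties in architecture.items():
        for prop in properties:
            for threat in STRIDE_RULES.get(prop, []):
                threats.append((component, threat))
    return threats

# Example architecture, declared as component -> security-relevant properties.
architecture = {
    "login_api": ["external_input", "crosses_trust_boundary"],
    "user_db": ["stores_data"],
}

for component, threat in enumerate_threats(architecture):
    print(f"{component}: {threat}")
```

Each `(component, threat)` pair is a seed for a concrete test case; the design-stage automation described in the talk would take such pairs further, generating realistic attack paths and mitigations.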

Price: FREE

Dr. Alexander (Oleksandr) Adamov

Dr. Alexander (Oleksandr) Adamov is the Founder and CEO of NioGuard Security Lab, specializing in the application of AI to cybersecurity. With over two decades of experience in cyberattack analysis, he currently teaches at Blekinge Institute of Technology (BTH). He also serves as a member of the European Cybercrime Training and Education Group (ECTEG) and the Anti-Malware Testing Standards Organization (AMTSO). Dr. Adamov frequently speaks at international conferences, including Virus Bulletin, DeepSec, OWASP, and BSides.