LLM-powered Threat Modeling vs Security Testing

Description

This talk explores how threat modeling assisted by Large Language Models (LLMs) can shift security verification left in the software development lifecycle. By applying LLMs to analyze system architectures, teams can identify potential vulnerabilities and generate realistic attack paths and scenarios. This approach automates the transformation of abstract threat models into actionable test cases and mitigations at the design stage.

Through LLM-powered threat modeling, security validation evolves from a reactive process to a predictive one, anticipating and mitigating threats before they can be exploited in the wild.
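To make the idea concrete, the sketch below shows one way an architecture description might be turned into a threat-modeling prompt that asks an LLM for STRIDE-categorized threats, attack paths, and matching test cases. This is a minimal illustration, not the speaker's method; the component names are invented, and the actual LLM call is left as a placeholder since any chat-completion client could be used.

```python
from dataclasses import dataclass

# The six STRIDE threat categories commonly used in threat modeling.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

@dataclass
class Component:
    """One element of the system architecture under analysis."""
    name: str
    description: str

def build_threat_prompt(components: list[Component]) -> str:
    """Turn an architecture description into a prompt asking the LLM
    for threats, attack paths, and security test cases per component."""
    arch = "\n".join(f"- {c.name}: {c.description}" for c in components)
    categories = ", ".join(STRIDE)
    return (
        "You are a security architect. For the system below, enumerate "
        f"threats per STRIDE category ({categories}). For each threat, "
        "describe a concrete attack path and propose a security test "
        "case that verifies the mitigation.\n\n"
        "System architecture:\n" + arch
    )

# Hypothetical example architecture; the resulting prompt would be sent
# to the LLM client of your choice at design time.
prompt = build_threat_prompt([
    Component("API gateway", "public HTTPS entry point, JWT auth"),
    Component("Payments service", "internal gRPC, talks to PostgreSQL"),
])
print(prompt)
```

The key design point is that the prompt encodes both the architecture and the desired output shape (threat, attack path, test case), so the LLM's response can be mapped directly onto design-stage test cases rather than free-form advice.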

Speaker

Oleksandr Adamov

Founder and CEO of NioGuard Security Lab

Event Details
Date

Thursday, December 11, 2025

Time

11:30 - 12:30

Location

Online Webinar