The battle of algorithms: discovering offensive AI


As machine learning applications become more widespread, a new era of cyber threats is emerging, one that uses offensive artificial intelligence (AI) to supercharge attack campaigns. Offensive AI allows attackers to automate reconnaissance, craft tailored impersonation attacks, and even self-propagate to avoid detection. Security teams can prepare by turning to defensive AI to fight back: autonomous cyber defense that learns on the fly to detect and respond to the subtlest indicators of an attack, wherever it appears.

Marcy Rizzo, of MIT Technology Review, interviews Marcus Fowler and Max Heinemeyer of Darktrace in January 2021.

MIT Technology Review recently spoke with experts at Darktrace – Marcus Fowler, Director of Strategic Threat, and Max Heinemeyer, Director of Threat Hunting – to discuss current and emerging applications of offensive AI, defensive AI, and the ongoing algorithmic battle between the two.

Register to watch the webcast.

This content was produced by Insights, the personalized content arm of MIT Technology Review. It was not written by the editorial staff of MIT Technology Review.
