Brian Austin

December 6, 2023

Are you protecting your new AI project?

The folks over at App Economy Insights break down the biggest security threats from AI in 2024. At a high-level: Adversarial AI has been area of focus in security for over a decade and modern algorithms aid attacks in four main areas: Speed, Scale, Scope, Sophistication.  

The widespread adoption of LLMs creates additional vectors for exploitation. Prompt attacks and training data extraction risk expose sensitive data that may have been used to train models. Backdoor models and data poisoning attacks alter behavior and output of models. Outright theft of expensive to build training models is a major risk to companies. 

Security provides already focus on AI driven attacks against humans or source code, but it's important for developers to consider vulnerabilities against the AI itself. Properly scrubbing input, auditing of open-source libraries and verification of training data sets is a critical part of AI tool development.

Despite the click-bait title this video does a good job at explaining these threats in under 15 minutes.

About Brian Austin

| AI Dev Tool Research | Engineering Leadership | Tech Lead Manager | Software Architect |

This newsletter syndicates all of my LinkedIn content.

For AI project updates check out Project Maestro on Substack.

All other links at https://bit.ly/m/bwaustin