Microsoft has released a tool to prevent AI hacking

Microsoft has released Counterfit, an automated open-source tool designed to help companies assess the security of their machine learning-based systems.

The Counterfit project is available on GitHub and includes a command-line tool and general automation layer to enable developers to simulate cyberattacks against AI systems.

Anyone can download the tool and run it in Azure Cloud Shell from a browser, or install and run it locally in an Anaconda Python environment.

Counterfit is capable of evaluating AI models in various cloud environments, on-premises, or at the edge. The tool is agnostic to the underlying AI model and also supports different data types, including text, images, and generic input.

Information security professionals can use Counterfit for penetration testing and red teaming of AI systems, for scanning AI systems for vulnerabilities, and for logging attacks against AI models.

Counterfit began as a set of purpose-built scripts for simulating attacks on individual AI models. Microsoft initially used the scripts for internal testing, but over time Counterfit evolved into an automated tool capable of testing attacks against multiple AI models at once.
