
Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction—and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication.
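To make the "poisoning" idea concrete, here is a minimal sketch of one such attack, label flipping, in which an adversary who can tamper with training data silently corrupts a fraction of the labels. The synthetic dataset, the logistic-regression model, and the 20% corruption rate are illustrative assumptions, not details from the NIST report:

```python
# Minimal sketch of a label-flipping data-poisoning attack.
# Dataset, model, and corruption rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean synthetic binary-classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of 20% of the training set (assumed rate).
poison_rate = 0.20
n_poison = int(poison_rate * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model degrades on held-out data even though the
# attacker never touched the test set or the model itself.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

The point of the sketch is that the attacker needs no access to the trained model: corrupting a slice of the training data is enough to degrade behavior downstream.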

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia, and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them—with the understanding that there is no silver bullet.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”
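As one concrete pairing of an attack technique with a mitigation reported in the literature, the sketch below mounts a fast gradient sign method (FGSM) evasion attack on a toy classifier and applies adversarial training, a common but, as the authors caution, non-foolproof defense. The tiny model, the data, and the epsilon value are illustrative assumptions rather than material from the publication:

```python
# Minimal sketch: FGSM evasion attack plus adversarial training as a
# mitigation. Model, data, and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary-classification data: label is 1 when the features sum above 0.
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).float()

model = nn.Sequential(nn.Linear(10, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(x, y, eps=0.3):
    """Perturb x one step in the sign of the loss gradient (evasion attack)."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x).squeeze(-1), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Adversarial training: each step mixes clean and adversarial examples.
for _ in range(200):
    x_adv = fgsm(X, y)
    opt.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * loss_fn(model(X).squeeze(-1), y) \
         + 0.5 * loss_fn(model(x_adv).squeeze(-1), y)
    loss.backward()
    opt.step()

# Evaluate on clean inputs and on freshly crafted adversarial inputs.
x_adv = fgsm(X, y)
with torch.no_grad():
    acc = ((model(X).squeeze(-1) > 0).float() == y).float().mean().item()
    acc_adv = ((model(x_adv).squeeze(-1) > 0).float() == y).float().mean().item()
print(f"clean accuracy: {acc:.2f}, accuracy under FGSM: {acc_adv:.2f}")
```

Even in this toy setting the mitigation only narrows the gap between clean and adversarial accuracy; it does not close it, which mirrors the publication's caution that current defenses lack robust assurances.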
