adversarial attack (de.: adverser Angriff)
The deliberate use of adversarial examples to cause errors in the output of a model. Artificial neural networks in particular are prone to this kind of attack.
ISO/IEC TR 29119-11:2020: deliberate use of adversarial examples (3.1.7) to cause an ML model (3.1.46) to fail
ISTQB CT-AI Syllabus: The deliberate use of adversarial examples to cause an ML model to fail
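To make the definition concrete, the following is a minimal sketch of one common way to craft adversarial examples, the Fast Gradient Sign Method (FGSM), assuming a differentiable PyTorch classifier. The function name, parameter names, and the epsilon value are illustrative assumptions and are not part of the cited definitions.

import torch

def fgsm_adversarial_example(model, loss_fn, x, y, epsilon=0.03):
    # Hypothetical helper: crafts an adversarial version of input x
    # for the given model and loss function.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Nudge each input feature in the direction that increases the loss,
    # bounded elementwise by epsilon so the change stays small.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

Feeding the returned tensor back into the model instead of the original input is the "deliberate use" referred to above: a small, targeted perturbation intended to make the model fail.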
Source: AI-Glossary.org (https://www.ai-glossary.org), License of definition text (excl. standard references): CC BY-SA 4.0, accessed: 2024-11-21
BibTeX information
@misc{aiglossary_adversarialattack_18g4v37,
  author       = {{AI-Glossary.org}},
  title        = {{adversarial attack}},
  howpublished = {https://www.ai-glossary.org/index.php?p=18g4v37\&l=en},
  year         = {2024},
  note         = {online, accessed: 2024-11-21}
}