adversarial testing (de.: adverses Testen)

Testing approach based on the attempted creation and execution of adversarial examples to identify defects in a model, in order to increase the model's robustness and reduce its susceptibility to adversarial inputs.

Note: Typically applied to neural networks.

ISO/IEC TR 29119-11:2020: testing approach based on the attempted creation and execution of adversarial examples (3.1.7) to identify defects in an ML model (3.1.46)

Note 1 to entry: Typically applied to ML models in the form of a neural network (3.1.48).
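To make the definition concrete, the following is a minimal sketch of generating an adversarial example with the Fast Gradient Sign Method (FGSM), one common technique for adversarial testing. The model (a hand-set logistic classifier), its weights, and the perturbation budget `eps` are illustrative assumptions, not part of the glossary entry or the ISO definition.

```python
import numpy as np

# Toy logistic "model" under test; the weights are assumptions for illustration.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_example(x, label, eps=1.2):
    """Fast Gradient Sign Method: perturb x by eps in the direction
    that increases the loss, pushing the prediction away from label."""
    # For binary cross-entropy on a linear model, the input gradient
    # is (p - label) * w.
    grad = (predict(x) - label) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, -1.0, 0.5])     # input correctly classified as class 1
x_adv = fgsm_example(x, label=1.0)  # adversarial example derived from x

print(predict(x))      # high confidence for class 1
print(predict(x_adv))  # the small perturbation flips the prediction
```

If the model's prediction changes under such a small, targeted perturbation, the test has exposed a robustness defect; in practice the same idea is applied to neural networks, where the gradient is obtained by backpropagation.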

Source: AI-Glossary.org (https://www.ai-glossary.org), License of definition text (excl. standard references): CC BY-SA 4.0, accessed: 2024-11-23

BibTeX-Information

@misc{aiglossary_adversarialtesting_18r94ce,
  author       = {{AI-Glossary.org}},
  title        = {{adversarial testing}},
  howpublished = {https://www.ai-glossary.org/index.php?p=18r94ce\&l=en},
  year         = {2024},
  note         = {online, accessed: 2024-11-23}
}