Testing approach based on the attempted creation and execution of adversarial examples to identify defects in a model, with the aim of increasing its robustness and fault tolerance.
Note: Typically applied to neural networks.
ISO/IEC TR 29119-11:2020: testing approach based on the attempted creation and execution of adversarial examples (3.1.7) to identify defects in an ML model (3.1.46).
Note 1 to entry: Typically applied to ML models in the form of a neural network (3.1.48).
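Example: A minimal sketch of the "creation and execution" steps, using the Fast Gradient Sign Method (FGSM) as one common way to construct adversarial examples. FGSM, the PyTorch library, and the model and input names below are illustrative assumptions and are not part of the definition above.

```python
# Adversarial testing sketch (hypothetical model and data, FGSM perturbation).
import torch
import torch.nn as nn

def fgsm_adversarial_example(model, x, label, epsilon=0.1):
    """Create an adversarial example by nudging x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Hypothetical, untrained classifier and a single fake "image" for illustration only.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])  # assumed true class

# Creation step: build the adversarial example.
x_adv = fgsm_adversarial_example(model, x, label)

# Execution step: run the model on the adversarial example and check whether the
# prediction flips, which would expose a robustness defect in the model under test.
with torch.no_grad():
    original_pred = model(x).argmax(dim=1)
    adversarial_pred = model(x_adv).argmax(dim=1)
print("prediction flipped:", bool(original_pred.item() != adversarial_pred.item()))
```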