Please use this identifier to cite or link to this item: https://doi.org/10.1109/ACCESS.2022.3224930

Full metadata record (DC Field: Value)
dc.contributor.author: Perales Gómez, Ángel Luis
dc.contributor.author: Fernández Maimó, Lorenzo
dc.contributor.author: García Clemente, Félix J.
dc.contributor.author: Maroto Morales, Alejandro
dc.contributor.author: Huertas Celdrán, Alberto
dc.contributor.author: Bovet, Gérôme
dc.date.accessioned: 2024-06-28T07:53:05Z
dc.date.available: 2024-06-28T07:53:05Z
dc.date.issued: 2022-11-28
dc.identifier.citation: IEEE Access, Volume 10, pp. 124582-124594
dc.identifier.issn: 2169-3536
dc.identifier.uri: http://hdl.handle.net/10201/142730
dc.description: © 2022 The Authors. This document is made available under the CC BY-SA 4.0 license (http://creativecommons.org/licenses/by-sa/4.0/). This document is the published version of a work that appeared in final form in IEEE Access. To access the final work, see DOI: https://doi.org/10.1109/ACCESS.2022.3224930
dc.description.abstract: Anomaly Detection systems based on Machine Learning and Deep Learning are the most promising solutions to detect cyberattacks in industry. However, these techniques are vulnerable to adversarial attacks that degrade prediction performance. Several techniques have been proposed in the literature to measure the robustness of Anomaly Detection models. However, they do not consider that, although a small perturbation in an anomalous sample belonging to an attack, e.g., Denial of Service, could cause it to be misclassified as normal while retaining its ability to cause damage, an excessive perturbation might also transform it into a truly normal sample, with no real impact on the industrial system. This paper presents a methodology to calculate the robustness of Anomaly Detection models in industrial scenarios. The methodology comprises four steps and uses a set of additional models, called support models, to determine whether an adversarial sample remains anomalous. We carried out the validation using the Tennessee Eastman process, a simulated testbed of a chemical process. In this scenario, we applied the methodology to both a Long Short-Term Memory (LSTM) neural network and a 1-Dimensional Convolutional Neural Network (1D-CNN), each focused on detecting anomalies produced by different cyberattacks. The experiments showed that the 1D-CNN is significantly more robust than the LSTM for our testbed. Specifically, a perturbation of 60% (empirical robustness of 0.6) of the original sample is needed to generate adversarial samples for the LSTM, whereas for the 1D-CNN the required perturbation increases up to 111% (empirical robustness of 1.11).
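To make the abstract's notion of empirical robustness concrete: it is the smallest relative perturbation of an anomalous sample that the detector misclassifies as normal while the support models confirm the sample is still anomalous. The following Python sketch is a hypothetical illustration, not the paper's implementation; the callable detector and support_models interfaces, the fixed adversarial direction, and the linear search over perturbation magnitudes are all assumptions made for clarity.

    import numpy as np

    def empirical_robustness(detector, support_models, x, direction,
                             max_eps=2.0, steps=200):
        # Sketch: find the smallest perturbation magnitude eps (relative to
        # the norm of x) for which the detector is fooled into predicting
        # "normal" while every support model still labels the sample
        # "anomalous". detector and support_models are assumed to be
        # callables returning class labels (hypothetical interface).
        x_norm = np.linalg.norm(x)
        for eps in np.linspace(0.0, max_eps, steps):
            x_adv = x + eps * x_norm * direction
            if detector(x_adv) == "normal" and \
               all(m(x_adv) == "anomalous" for m in support_models):
                return eps  # e.g., 0.6 means a 60% perturbation was needed
        return None  # no valid adversarial sample within the search budget

Under this reading, the reported values (0.6 for the LSTM, 1.11 for the 1D-CNN) mean the 1D-CNN requires nearly twice the perturbation before it is fooled.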
dc.format: application/pdf
dc.language: eng
dc.publisher: IEEE
dc.relation: This work was supported by the Spanish Ministry of Science, Innovation and Universities, State Research Agency, FEDER Funds, under Grant RTI2018-095855-B-I00; by the Swiss Federal Office for Defense Procurement (Armasuisse) through the CyberSpec project under Grant CYD-C-2020003; and by the European Commission Horizon 2020 Programme under grant agreement number H2020-SU-DS-2019/883335 - PALANTIR (Practical Autonomous Cyberhealth for resilient SMEs & Microenterprises), and the European Commission (FEDER/ERDF).
dc.rights: info:eu-repo/semantics/openAccess
dc.rights: Attribution-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-sa/4.0/
dc.title: A methodology for evaluating the robustness of anomaly detectors to adversarial attacks in industrial scenarios
dc.type: info:eu-repo/semantics/article
dc.relation.publisherversion: https://ieeexplore.ieee.org/document/9964189
dc.identifier.doi: https://doi.org/10.1109/ACCESS.2022.3224930
Appears in collections: Artículos: Ingeniería y Tecnología de Computadores



This item is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.