Please use this identifier to cite or link to this item:
https://doi.org/10.1016/j.jmaa.2020.124584
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Almira, José M. | - |
dc.contributor.author | López-de-Teruel, Pedro E. | - |
dc.contributor.author | Romero-López, Diego J. | - |
dc.contributor.author | Voigtlaender, Felix | - |
dc.contributor.other | Facultades, Departamentos, Servicios y Escuelas::Departamentos de la UMU::Ingeniería y Tecnología de Computadores | es |
dc.date.accessioned | 2024-02-07T13:29:23Z | - |
dc.date.available | 2024-02-07T13:29:23Z | - |
dc.date.issued | 2021-02-01 | - |
dc.identifier.citation | Journal of Mathematical Analysis and Applications, Volume 494, Issue 1, 1 February 2021 | es |
dc.identifier.issn | Print: 0022-247X | - |
dc.identifier.issn | Electronic: 1096-0813 | - |
dc.identifier.uri | http://hdl.handle.net/10201/138913 | - |
dc.description | © 2021. This manuscript version is made available under the CC-BY-NC-ND license http://creativecommons.org/licenses/by-nc-nd/4.0/. This document is the Accepted version of a Published Work that appeared in final form in the Journal of Mathematical Analysis and Applications. To access the final edited and published work see https://doi.org/10.1016/j.jmaa.2020.124584 | - |
dc.description.abstract | We prove a negative result for the approximation of functions defined on compact subsets of R^d (where d >=2) using feedforward neural networks with one hidden layer and arbitrary continuous activation function. In a nutshell, this result claims the existence of target functions that are as difficult to approximate using these neural networks as one may want. We also demonstrate an analogous result (for general d in N) for neural networks with an arbitrary number of hidden layers, for activation functions that are either rational functions or continuous splines with finitely many pieces. | es |
dc.format | application/pdf | es |
dc.format.extent | 12 pages | es |
dc.language | eng | es |
dc.publisher | Elsevier | es |
dc.relation | No funding external to the University | es |
dc.relation.replaces | https://arxiv.org/pdf/1810.10032.pdf | es |
dc.rights | info:eu-repo/semantics/openAccess | es |
dc.rights | Attribution-NoDerivatives 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nd/4.0/ | * |
dc.subject | Feedforward neural networks | es |
dc.subject | Function approximation | - |
dc.subject | Compact sets | - |
dc.subject.other | CDU::5 - Pure and natural sciences::51 - Mathematics | es |
dc.subject.other | CDU::6 - Applied sciences::62 - Engineering. Technology | es |
dc.title | Negative results for approximation using single layer and multilayer feedforward neural networks | es |
dc.type | info:eu-repo/semantics/article | es |
dc.relation.publisherversion | https://www.sciencedirect.com/science/article/pii/S0022247X20307460 | es |
dc.identifier.doi | https://doi.org/10.1016/j.jmaa.2020.124584 | - |
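The abstract's negative result for one-hidden-layer networks can be written schematically as follows. This is a paraphrase of the claim as stated in the abstract, not the paper's exact theorem; the set \(\Sigma_n\) and the sup-norm distance are assumptions about the precise formalization.

```latex
% Schematic statement (paraphrase of the abstract; see the paper for the
% precise hypotheses and conclusion). Let K \subset \mathbb{R}^d be compact,
% d \ge 2, let \sigma be any continuous activation, and let
%   \Sigma_n = \Bigl\{\, x \mapsto \sum_{j=1}^{n} c_j\,
%       \sigma(\langle a_j, x\rangle + b_j) \,\Bigr\}
% be the one-hidden-layer networks with n neurons. Then no prescribed
% rate of decay of the approximation error can be guaranteed:
\[
  \forall\, (\varepsilon_n)_{n \in \mathbb{N}} \searrow 0
  \;\; \exists\, f \in C(K) :\quad
  \inf_{g \in \Sigma_n} \, \| f - g \|_{\sup(K)} \;\ge\; \varepsilon_n
  \quad \text{for all } n \in \mathbb{N}.
\]
```

In words: for every target error sequence tending to zero, however slowly, there is a continuous function whose best shallow-network approximation error stays above it, which is the sense in which target functions can be "as difficult to approximate ... as one may want".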
Appears in collections: | Artículos: Ingeniería y Tecnología de Computadores |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
1810.10032.pdf | | 214.19 kB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License