Please use this identifier to cite or link to this item: https://doi.org/10.1145/3531011

Full metadata record:

dc.contributor.author: Muñoz-Martínez, Francisco
dc.contributor.author: Abellán, José L.
dc.contributor.author: Acacio, Manuel E.
dc.contributor.author: Krishna, Tushar
dc.contributor.other: Facultades, Departamentos, Servicios y Escuelas::Departamentos de la UMU::Ingeniería y Tecnología de Computadores
dc.date.accessioned: 2024-01-23T13:27:25Z
dc.date.available: 2024-01-23T13:27:25Z
dc.date.issued: 2023-09-08
dc.identifier.citation: ACM Journal on Emerging Technologies in Computing Systems, Vol. 19, No. 4, September 2023.
dc.identifier.issn: Print: 1550-4832
dc.identifier.issn: Electronic: 1550-4840
dc.identifier.uri: http://hdl.handle.net/10201/137602
dc.description: © 2023 Association for Computing Machinery (ACM). This document is made available under the CC-BY 4.0 license (http://creativecommons.org/licenses/by/4.0/). It is the published version of a work that appeared in final form in ACM Journal on Emerging Technologies in Computing Systems (JETC).
dc.description.abstract: The increasing deployment of Deep Neural Networks (DNNs) has recently fueled interest in the development of specialized accelerator architectures capable of meeting their stringent performance and energy-consumption requirements. DNN accelerators can be organized around three separate NoCs, namely the distribution, multiplier, and reduction networks (DN, MN, and RN, respectively), between the global buffer(s) and the compute units (multipliers/adders). Among them, the RN, used to generate and reduce the partial sums produced during DNN processing, is a first-order driver of the accelerator's area and energy efficiency. RNs can be orchestrated to exploit a Temporal, a Spatial, or a Spatio-Temporal reduction dataflow; among these, Spatio-Temporal reduction has shown superior performance. However, as we demonstrate in this work, a state-of-the-art implementation of the Spatio-Temporal reduction dataflow, based on the addition of Accumulators (Ac) to the RN (i.e., the RN+Ac strategy), can incur significant area and energy overheads. To cope with this issue, we propose STIFT (Spatio-Temporal Integrated Folding Tree), which implements the Spatio-Temporal reduction dataflow entirely on the RN hardware substrate, i.e., without the extra accumulators. STIFT achieves significant area and power savings relative to the more complex RN+Ac strategy while preserving its performance advantage.
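To make the three reduction dataflows contrasted in the abstract concrete, here is a minimal software sketch; it is an illustrative analogy only, not the paper's hardware design, and all function names and the tree_width parameter are assumptions. Temporal reduction accumulates partial sums one per step in a single accumulator, spatial reduction sums a set of partial sums at once through an adder tree, and spatio-temporal reduction reduces fixed-width chunks spatially and accumulates the chunk results over time.

```python
# Illustrative sketch of the three reduction dataflows; a software
# analogy only, not the STIFT hardware described in the paper.

def temporal_reduce(partial_sums):
    """Temporal reduction: one accumulator adds partial sums over time."""
    acc = 0
    for p in partial_sums:   # one addition per step, in place
        acc += p
    return acc

def spatial_reduce(partial_sums):
    """Spatial reduction: an adder tree sums all inputs in one pass."""
    level = list(partial_sums)
    while len(level) > 1:    # each tree level halves the operand count
        if len(level) % 2:   # fold an odd trailing element forward
            level[-2] += level[-1]
            level.pop()
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]

def spatio_temporal_reduce(partial_sums, tree_width=4):
    """Spatio-temporal reduction: spatially reduce tree_width-sized
    chunks, then accumulate the chunk results over time."""
    acc = 0
    for i in range(0, len(partial_sums), tree_width):
        acc += spatial_reduce(partial_sums[i:i + tree_width])
    return acc

if __name__ == "__main__":
    data = list(range(1, 11))  # ten partial sums
    assert temporal_reduce(data) == spatial_reduce(data) \
        == spatio_temporal_reduce(data) == sum(data)
```

The spatio-temporal variant matters when the number of partial sums to reduce exceeds the physical width of the adder tree, which is the situation where the paper argues the RN+Ac strategy pays its area and energy cost.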
dc.format: application/pdf
dc.format.extent: 19
dc.language: eng
dc.publisher: Association for Computing Machinery (ACM)
dc.relation: This work was supported by grant RTI2018-098156-B-C53, funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", by NSF OAC 1909900, and by the US Department of Energy ARIAA co-design center. It was also supported by project grant PID2020-112827GB-I00, funded by MCIN/AEI/10.13039/501100011033. F. Muñoz-Martínez was supported by grant 20749/FPI/18 from Fundación Séneca.
dc.rights: info:eu-repo/semantics/openAccess
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Deep Neural Networks
dc.subject: DNN Accelerators
dc.subject: Computer Architecture
dc.subject: Networks-On-Chip
dc.title: STIFT: A Spatio-Temporal Integrated Folding Tree for Efficient Reductions in Flexible DNN Accelerators
dc.type: info:eu-repo/semantics/article
dc.relation.publisherversion: https://dl.acm.org/doi/10.1145/3531011
dc.identifier.doi: https://doi.org/10.1145/3531011
Appears in collections: Artículos: Ingeniería y Tecnología de Computadores

Files in this item:
File: _JETC_Camera_Ready__RENIF_Project.pdf (1.14 MB, Adobe PDF)


This item is licensed under a Creative Commons license (Attribution 4.0 International).