Please use this identifier to cite or link to this item: https://doi.org/10.1515/jib-2024-0048

Full metadata record (DC field: value)
dc.contributor: Bernabé García, Gregorio
dc.contributor.author: Haro Orenes, Salvador de
dc.contributor.author: Bernabé García, Gregorio
dc.contributor.author: García Carrasco, José Manuel
dc.contributor.author: González Férez, Pilar
dc.date.accessioned: 2025-06-23T08:09:33Z
dc.date.available: 2025-06-23T08:09:33Z
dc.date.issued: 2025-06-04
dc.identifier.citation: Journal of Integrative Bioinformatics 2025; 20240048
dc.identifier.issn: 1613-4516 (electronic)
dc.identifier.uri: http://hdl.handle.net/10201/156440
dc.description: © 2025 the author(s). This manuscript version is made available under the CC-BY 4.0 license http://creativecommons.org/licenses/by/4.0/. This document is the published version of a work that appeared in final form in Journal of Integrative Bioinformatics. To access the final edited and published work see https://doi.org/10.1515/jib-2024-0048
dc.description.abstract: Left ventricular non-compaction is a cardiac condition marked by excessive trabeculae in the left ventricle’s inner wall. Although various methods exist to measure these structures, the medical community still lacks consensus on the best approach. Previously, we developed DL-LVTQ, a tool based on a U-Net neural network, to quantify trabeculae in this region. In this study, we expand the dataset to include new patients with Titin cardiomyopathy and healthy individuals with fewer trabeculae, requiring retraining of our models to enhance predictions. We also propose ViTUNeT, a neural network architecture combining U-Net and Vision Transformers to segment the left ventricle more accurately. Additionally, we train a YOLOv8 model to detect the ventricle and integrate it with the ViTUNeT model to focus on the region of interest. Results from ViTUNeT and YOLOv8 are similar to those of DL-LVTQ, suggesting that dataset quality limits further accuracy improvements. To test this, we analyze the MRI images and develop a method using two YOLOv8 models to identify and remove problematic images, leading to better results. Combining YOLOv8 with deep learning networks offers a promising approach for improving cardiac image analysis and segmentation.
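The detect-then-segment pipeline summarized in the abstract (a YOLOv8 detector locating the left ventricle, then a segmentation network applied only to the cropped region of interest) can be illustrated with a minimal sketch. This is not the authors' released code: the weight file lv_detector_yolov8.pt, the crop_left_ventricle helper, the segment_trabeculae placeholder, and the input file name are hypothetical, and only the standard ultralytics YOLO prediction API is assumed.

# Minimal sketch of the detect-then-segment idea: YOLOv8 localizes the left
# ventricle, the image is cropped to that box, and the crop would be handed to
# the segmentation network. Weight files and helper names are hypothetical.
import cv2
import numpy as np
from ultralytics import YOLO

detector = YOLO("lv_detector_yolov8.pt")  # hypothetical fine-tuned YOLOv8 weights

def crop_left_ventricle(image: np.ndarray, margin: int = 10) -> np.ndarray | None:
    """Return the crop around the highest-confidence LV detection, or None."""
    result = detector(image, verbose=False)[0]
    if len(result.boxes) == 0:
        return None
    best = int(result.boxes.conf.argmax())
    x1, y1, x2, y2 = map(int, result.boxes.xyxy[best].tolist())
    h, w = image.shape[:2]
    x1, y1 = max(0, x1 - margin), max(0, y1 - margin)
    x2, y2 = min(w, x2 + margin), min(h, y2 + margin)
    return image[y1:y2, x1:x2]

if __name__ == "__main__":
    mri_slice = cv2.imread("slice_0001.png")  # hypothetical exported MRI slice
    if mri_slice is None:
        raise FileNotFoundError("slice_0001.png")
    roi = crop_left_ventricle(mri_slice)
    if roi is not None:
        # mask = segment_trabeculae(roi)  # placeholder for the ViTUNeT segmentation step
        cv2.imwrite("slice_0001_roi.png", roi)

Restricting segmentation to the detected box is what lets the network spend its capacity on the ventricle rather than on surrounding anatomy; the same detector outputs could also be compared across two independently trained YOLOv8 models to flag problematic images, as the abstract describes.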
dc.format: application/pdf
dc.format.extent: 16
dc.language: eng
dc.publisher: De Gruyter
dc.relation: This work has been partially funded by Grant TED2021-129221B-I00, funded by MCIN/AEI/10.13039/501100011033 and by the “European Union NextGenerationEU/PRTR”. This work has been carried out in collaboration with the Hospitals Virgen de la Arrixaca and Vall d’Hebron in Murcia and Barcelona (Spain).
dc.rights: info:eu-repo/semantics/openAccess
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Data analysis
dc.subject: Image detection
dc.subject: Left ventricular non-compaction diagnosis
dc.subject: Medical imaging
dc.subject: Convolutional neural networks
dc.title: A ViTUNeT-based model using YOLOv8 for efficient LVNC diagnosis and automatic cleaning of dataset
dc.type: info:eu-repo/semantics/article
dc.relation.publisherversion: https://www.degruyterbrill.com/document/doi/10.1515/jib-2024-0048/html
dc.identifier.doi: https://doi.org/10.1515/jib-2024-0048
dc.contributor.department: Departamento de Ingeniería y Tecnología de Computadores
Appears in collections: Artículos

Files in this item:
File: 10.1515_jib-2024-0048.pdf   Size: 3.35 MB   Format: Adobe PDF


This item is licensed under a Creative Commons License.