Please use this identifier to cite or link to this item: https://doi.org/10.1371/journal.pone.0209547

Full metadata record

DC Field | Value | Language
dc.contributor.author | Miñarro Giménez, José Antonio | -
dc.contributor.author | Martínez Costa, Catalina | -
dc.contributor.author | Karlsson, Daniel | -
dc.contributor.author | Schulz, Stefan | -
dc.contributor.author | Gøeg, Kristine Rosenbeck | -
dc.date.accessioned | 2025-02-02T17:56:25Z | -
dc.date.available | 2025-02-02T17:56:25Z | -
dc.date.issued | 2018-12-27 | -
dc.identifier.citation | PLoS One, 2018, Vol. 13(12): e0209547 | es
dc.identifier.issn | 1932-6203 (electronic) | -
dc.identifier.uri | http://hdl.handle.net/10201/149949 | -
dc.description | © 2018 Miñarro-Giménez et al. This manuscript version is made available under the CC-BY 4.0 license (http://creativecommons.org/licenses/by/4.0/). This document is the published manuscript version of a published work that appeared in final form in PLoS ONE. To access the final edited and published work, see https://doi.org/10.1371/journal.pone.0209547 | -
dc.description.abstract | SNOMED CT provides about 300,000 codes with fine-grained concept definitions to support the interoperability of health data. Coding clinical texts with medical terminologies is not a trivial task and is prone to disagreements between coders. We conducted a qualitative analysis to identify sources of disagreement in an annotation experiment which used a subset of SNOMED CT with some restrictions. A corpus of 20 English clinical text fragments from diverse origins and languages was annotated independently by two medically trained annotators following a specific annotation guideline. By following this guideline, the annotators had to assign sets of SNOMED CT codes to noun phrases, together with concept and term coverage ratings. The annotations were then manually examined against a reference standard to determine sources of disagreement. Five categories were identified. In our results, the most frequent cause of inter-annotator disagreement was related to human issues. In several cases, disagreements revealed gaps in the annotation guidelines and a lack of annotator training. The remaining issues can be influenced by some SNOMED CT features. (An illustrative sketch of how such annotations can be represented and compared is given after this metadata record.) | es
dc.format | application/pdf | es
dc.format.extent | 15 | es
dc.language | eng | es
dc.publisher | Public Library of Science | es
dc.relation | This paper is produced as part of ASSESS CT, which is funded from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 643818 to S.S. | es
dc.rights | info:eu-repo/semantics/openAccess | es
dc.rights | Attribution 4.0 International | *
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | *
dc.title | Qualitative analysis of manual annotations of clinical text with SNOMED CT | es
dc.type | info:eu-repo/semantics/article | es
dc.relation.publisherversion | https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0209547 | -
dc.identifier.doi | https://doi.org/10.1371/journal.pone.0209547 | -
dc.contributor.department | Departamento de Informática y Sistemas | -
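The abstract above describes each annotation as a set of SNOMED CT codes assigned to a noun phrase, together with concept and term coverage ratings, which are then compared between annotators and against a reference standard. The minimal Python sketch below is not taken from the paper: the data layout, the coverage-rating scale, the disagreement labels, and the example codes are assumptions made here purely to illustrate such a comparison.

    # Illustrative sketch only: data layout, coverage scale, labels, and example
    # SNOMED CT codes are assumptions for demonstration, not the paper's tooling.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Annotation:
        """One annotator's judgement for a single noun phrase."""
        phrase: str
        codes: frozenset            # SNOMED CT concept identifiers (as strings)
        concept_coverage: str       # assumed scale, e.g. "full" / "partial" / "none"
        term_coverage: str          # assumed scale, e.g. "full" / "partial" / "none"

    def compare(a: Annotation, b: Annotation) -> str:
        """Classify how two annotations of the same phrase relate to each other."""
        if a.codes == b.codes:
            return "exact match"
        if a.codes & b.codes:
            return "partial overlap"
        return "disagreement"

    # Hypothetical example: two annotators code the noun phrase "chest pain".
    ann1 = Annotation("chest pain", frozenset({"29857009"}), "full", "full")
    ann2 = Annotation("chest pain", frozenset({"29857009", "22253000"}), "full", "partial")
    print(compare(ann1, ann2))      # prints: partial overlap

In the study itself, disagreements were further examined manually and grouped into five categories; this sketch only distinguishes exact matches, partial overlaps, and complete disagreements between code sets.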
Appears in collections: Artículos

Files in this item:

File | Description | Size | Format
PlosOne.pdf | - | 615.2 kB | Adobe PDF


This item is licensed under a Creative Commons License (Attribution 4.0 International).