
Repositorio Institucional de la Universidad de Murcia

Browsing by Department "Ingeniería de la Información y las Comunicaciones"

Now showing 1 - 20 of 161
  • Open Access
    A Comprehensive Model for Securing Sensitive Patient Data in a Clinical Scenario
    (IEEE, 2023-11-30) López Martínez, Antonio; Gil Pérez, Manuel; Ruiz-Martínez, Antonio; Ingeniería de la Información y las Comunicaciones
    The clinical environment is one of the most important sources of sensitive patient data in healthcare. These data have attracted cybercriminals who pursue the theft of this information for personal gain. Therefore, protecting these data is a critical issue. This paper focuses on an analysis of the clinical environment, presents its general ecosystem and stakeholders, and inspects the main protocols implemented between the clinical components from a security and privacy perspective. Additionally, this article defines a complete use case to describe the typical workflow within a clinical setting: the life cycle of a patient sample. Moreover, we present and categorize crucial clinical information and divide it into two sensitivity levels: High and Very Sensitive, while considering the severe risks of cybercriminal access. The threat model for the use case has also been identified, in conjunction with the use case’s security and privacy needs. This work served as the basis for developing the minimum security and privacy requirements to protect the use case. Accordingly, we have defined protection mechanisms for each sensitivity level with the enabling technologies needed to satisfy each requirement. Finally, the main challenges and future steps for the use case are presented.
  • Embargo
    A Comprehensive Review of the State-of-the-Art on Security and Privacy Issues in Healthcare
    (2023-03-28) López Martínez, Antonio; Gil Pérez, Manuel; Ruiz-Martínez, Antonio; Ingeniería de la Información y las Comunicaciones
    Currently, healthcare is a critical environment in our society, one that attracts malicious activity and has suffered a significant number of damaging attacks. In parallel, recent advancements in technologies, computing systems, and wireless communications are changing the healthcare environment by adding different improvements and complexity to it. This article reviews the current state of the literature and provides a holistic view of cybersecurity in healthcare. With this purpose in mind, the article enumerates the main stakeholders and architecture implemented in the healthcare environment, as well as the main security issues (threats, attacks, etc.) produced in healthcare. In this context, this work maps the threats collected with a widely used knowledge-based framework, MITRE ATT&CK, building a contribution not seen so far. This article also enumerates the security mechanisms created to protect healthcare, identifying the principal research lines addressed in the literature, and listing the available public security-focused datasets used in machine learning to provide security in the medical domain. To conclude, the research challenges that need to be addressed in future research in this area are presented.
  • Open Access
    A differential structure approach to membrane segmentation in electron tomography
    (2011-09) Martinez-Sanchez, Antonio; García, Inmaculada; Fernández, Jose-Jesús; Ingeniería de la Información y las Comunicaciones
    Electron tomography allows three-dimensional visualization of cellular landscapes in molecular detail. Segmentation is a paramount stage for the interpretation of the reconstructed tomograms. Although several computational approaches have been proposed, none has prevailed as a generic method and thus segmentation through manual annotation is still a common choice. In this work we introduce a segmentation method targeted at membranes, which define the natural limits of compartments within biological specimens. Our method is based on local differential structure and on a Gaussian-like membrane model. First, it isolates information through scale-space and finds potential membrane-like points at a local scale. Then, the structural information is integrated at a global scale to yield the definite segmentation. We show and validate the performance of the algorithm on a number of tomograms under different experimental conditions.
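The local differential-structure criterion described in this abstract can be sketched in one dimension (a toy illustration, not the authors' tomography pipeline; the threshold and the discrete second derivative are illustrative assumptions):

```python
def membrane_candidates(profile, threshold=0.5):
    """Flag indices that look membrane-like in a 1D density profile:
    local maxima whose discrete second derivative is strongly
    negative, mirroring a local differential-structure criterion.
    Both the criterion and the threshold are illustrative."""
    hits = []
    for i in range(1, len(profile) - 1):
        # Discrete second derivative (curvature) at position i
        second = profile[i - 1] - 2 * profile[i] + profile[i + 1]
        is_max = profile[i] >= profile[i - 1] and profile[i] >= profile[i + 1]
        if is_max and second <= -threshold:
            hits.append(i)
    return hits
```

In the full method such local detections would then be integrated at a global scale; here only the local-scale step is shown.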
  • Embargo
    A fuzzy K-nearest neighbor classifier to deal with imperfect data
    (Springer-Verlag, 2017-04-01) Cadenas, Jose M.; Garrido Carrera, María del Carmen; Martínez, Raquel; Muñoz, Enrique; Bonissone, Piero P.; Ingeniería de la Información y las Comunicaciones
    The k-nearest neighbors method (kNN) is a nonparametric, instance-based method used for regression and classification. To classify a new instance, the kNN method computes its k nearest neighbors and generates a class value from them. Usually, this method requires that the information available in the datasets be precise and accurate, except for the existence of missing values. However, data imperfection is inevitable when dealing with real-world scenarios. In this paper, we present the kNNimp classifier, a k-nearest neighbors method to perform classification from datasets with imperfect values. The importance of each neighbor in the output decision is based on relative distance and its degree of imperfection. Furthermore, by using external parameters, the classifier enables us to define the maximum allowed imperfection, and to decide if the final output should be derived solely from the greatest-weight class (the best class) or from the best class and a weighted combination of the classes closest to it. To test the proposed method, we performed several experiments with both synthetic and real-world datasets with imperfect data. The results, validated through statistical tests, show that the kNNimp classifier is robust when working with imperfect data and maintains good performance when compared with other methods in the literature, applied to datasets with or without imperfection.
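The neighbour-weighting idea described above can be sketched as follows (a hypothetical weighting scheme combining inverse distance with an imperfection degree; it is not the authors' actual kNNimp algorithm):

```python
import math

def knn_imp_predict(train, query, k=3):
    """Classify `query` from neighbours weighted by inverse distance
    and by (1 - imperfection degree) of each training instance.
    `train` is a list of (features, label, imperfection) tuples,
    where imperfection is in [0, 1] (0 = perfectly known instance).
    The weight formula is an illustrative assumption."""
    # Euclidean distance from the query to every training instance
    scored = sorted(
        (math.dist(feats, query), label, imperfection)
        for feats, label, imperfection in train
    )
    # Accumulate a weight per class over the k nearest neighbours
    votes = {}
    for dist, label, imperfection in scored[:k]:
        weight = (1.0 - imperfection) / (1.0 + dist)
        votes[label] = votes.get(label, 0.0) + weight
    # Return the greatest-weight ("best") class
    return max(votes, key=votes.get)
```

An imperfect neighbour (imperfection near 1) thus contributes almost nothing to the vote even when it is close.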
  • Open Access
    A Fuzzy k-Nearest Neighbors Classifier to Deal with Imperfect Data
    (Springer-Verlag Berlin Heidelberg, 2018) Cadenas Figueredo, J.M.; Garrido Carrera, María del Carmen; Martínez España, R.; Muñoz, E.; Bonissone, P.; Ingeniería de la Información y las Comunicaciones
    The k-nearest neighbors method (kNN) is a nonparametric, instance-based method used for regression and classification. To classify a new instance, the kNN method computes its k nearest neighbors and generates a class value from them. Usually, this method requires that the information available in the datasets be precise and accurate, except for the existence of missing values. However, data imperfection is inevitable when dealing with real-world scenarios. In this paper, we present the kNNimp classifier, a k-nearest neighbors method to perform classification from datasets with imperfect values. The importance of each neighbor in the output decision is based on relative distance and its degree of imperfection. Furthermore, by using external parameters, the classifier enables us to define the maximum allowed imperfection, and to decide if the final output should be derived solely from the greatest-weight class (the best class) or from the best class and a weighted combination of the classes closest to it. To test the proposed method, we performed several experiments with both synthetic and real-world datasets with imperfect data. The results, validated through statistical tests, show that the kNNimp classifier is robust when working with imperfect data and maintains good performance when compared with other methods in the literature, applied to datasets with or without imperfection.
  • Restricted
    A hybrid neuro-fuzzy inference system-based algorithm for time series forecasting applied to energy consumption prediction
    (Elsevier, 2020-04-21) Jallal, Mohammed Ali; González Vidal, Aurora; Skarmeta Gómez, Antonio; Chabaa, Samira; Zeroual, Abdelouhab; Ingeniería de la Información y las Comunicaciones; Facultades de la UMU::Facultad de Informática
    The accuracy of the prediction of buildings’ energy consumption is being tackled using existing artificial intelligence techniques. However, there is a lack of effort in developing new techniques for solving that problem and, therefore, achieving higher performance, which is important for the efficient management of energy at many levels. This study addresses this gap by proposing a new hybrid machine learning algorithm that incorporates the adaptive neuro-fuzzy inference system model with a new version of the firefly algorithm, denominated the gender-difference firefly algorithm. We expanded the search space diversification to increase the accuracy of the prediction and adopted the autoregressive process in order to approximate the chaotic behavior of the consumption time series. A new layer, denominated non-working time adaptation, was also integrated so as to decrease the fast variability of the predictions during non-working periods of time. We applied our algorithm to consumption prediction on 1 h, 2 h and 3 h ahead horizons. We obtained improvements in the MAPE and R coefficient when compared with state-of-the-art publications on both a private dataset from the Faculty of Chemistry, located in the city of Murcia, Spain, and a public dataset of the consumption of a Retail building located in California, United States. We also show our method’s performance in five more buildings. Our results demonstrate the robustness and accuracy of our proposal when compared to traditional adaptive neuro-fuzzy inference system models and also to the different predictive techniques implemented in several works in the literature.
  • Open Access
    A k-nearest neighbors based approach applied to more realistic activity recognition datasets
    (IOS Press, 2018) Cadenas Figueredo, J. M.; Garrido Carrera, María del Carmen; Martínez España, R.; Muñoz, A.; Ingeniería de la Información y las Comunicaciones
    Thanks to the latest technological advances, today's society can store large volumes of data for most problems of daily life. These data are useless without a set of techniques to analyze them and thereby obtain knowledge that facilitates problem resolution. This paper focuses on the techniques provided by data mining as a tool for intelligent data analysis in the field of human activity recognition, specifically on the application of two data mining techniques capable of extracting knowledge from data that are not as accurate and exact as desirable. This type of data reflects the true nature of the information collected on a day-to-day basis. The proposed techniques allow us to preprocess the data by means of an instance selection that improves the computational requirements of the system response, while obtaining satisfactory accuracy. Several experiments are carried out on a real-world dataset and on various datasets derived from it synthetically to simulate more realistic datasets, illustrating the potential of the proposed techniques.
  • Open Access
    A lightweight acquisition of expert rules for interoperable clinical decision support system
    (2019) Cánovas-Segura, Bernardo; Morales, Antonio; Juarez, Jose M.; Campos Martínez, Manuel; Palacios, Francisco; Ingeniería de la Información y las Comunicaciones
    The process of adding new knowledge in the form of rules to already running Clinical Decision Support Systems (CDSSs) in hospitals is extremely costly and time consuming. There are two principal limitations: (1) the lack of a broad consensus regarding a uniform representation of clinical rules; and (2) the integration of new rule-based knowledge into hospital information systems. Objective: To provide a guideline with which to support knowledge acquisition for rule-based CDSSs and to facilitate the integration of that knowledge into hospital datasets using standard clinical terminologies and ontologies as reference elements. Materials and Methods: We have designed a straightforward 4-step methodology with which to incorporate the external knowledge sources and data integration required to run CDSSs in hospitals. This lightweight methodology is based on a reference ontology that integrates standard clinical terminologies and its objective is to effectively acquire procedural knowledge in the form of rules. Results: We have applied the methodology in the context of antimicrobial stewardship at a hospital. Recommendations from the European Committee on Antimicrobial Susceptibility Testing (EUCAST) were added to WASPSS, a CDSS running at the hospital. The reference ontology combines a subset of ATC terminologies for antibiotics and those of NCBI for microorganisms, including 584 and 1714 concepts, respectively. A total of 94 new rules were added to the CDSS so as to represent EUCAST knowledge. We also evaluated different implementations in order to study their scalability, during which time we analysed Drools 7.5 as a production rule engine, HermiT as an ontology reasoner and RuQAR as an integration tool. 
Our experiments show that the combination of a production rule engine and an ontology reasoner in runtime is more efficient than using a single rule engine with a knowledge base derived from the reference ontology (1.9 times faster than the next approach when executing 1000 expert rules on an ontology of 1000 concepts). Discussion: The methodology proposed helped to implement the knowledge acquisition process of EUCAST rules in a running CDSS. This methodology is applicable to other clinical domains when knowledge can be modelled with rules. Since it is a lightweight methodology, different implementation strategies are possible. The use of clinical standards also facilitates the future interoperation between CDSSs, particularly when using SNOMED as a reference ontology and employing future rule-sharing standards.
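The production-rule side of such a CDSS can be illustrated with a toy forward-chaining loop (a stand-in for a production rule engine like Drools; the rule and fact names below are invented for illustration):

```python
def fire_rules(facts, rules):
    """Naive forward chaining over if-then rules: keep applying any
    rule whose conditions are all known facts until nothing new can
    be derived. Each rule is a (conditions, conclusion) pair."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True  # a new fact may enable other rules
    return facts
```

A real engine adds indexing (e.g. Rete) so rules are not re-scanned on every pass, but the fixpoint semantics is the same.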
  • Open Access
    A Methodology Based on Machine Learning and Soft Computing to Design More Sustainable Agriculture Systems
    (MDPI, 2023) Cadenas Figueredo, J.M.; Garrido Carrera, María del Carmen; Martínez España, R.; Ingeniería de la Información y las Comunicaciones
    Advances in new technologies are allowing every field of real life to benefit from them. Among these, we can highlight the IoT ecosystem, which makes available large amounts of information; cloud computing, which provides large computational capacities; and Machine Learning techniques, together with the Soft Computing framework, to incorporate intelligence. Together they constitute a powerful set of tools for defining Decision Support Systems that improve decisions in a wide range of real-life problems. In this paper, we focus on the agricultural sector and the issue of sustainability. We propose a methodology in which, starting from time series data provided by the IoT ecosystem, the data are preprocessed and modelled using machine learning techniques within the framework of Soft Computing. The obtained model is able to carry out inferences over a given prediction horizon, enabling the development of Decision Support Systems that can help the farmer. By way of illustration, the proposed methodology is applied to the specific problem of early frost prediction. The benefits of the methodology are illustrated with specific scenarios validated by expert farmers in an agricultural cooperative. The evaluation and validation show the effectiveness of the proposal.
  • Open Access
    A methodology for energy multivariate time series forecasting in smart buildings based on feature selection
    (Elsevier, 2019-05-10) González Vidal, Aurora; Jiménez Barrionuevo, Fernando; Skarmeta Gómez, Antonio; Ingeniería de la Información y las Comunicaciones; Facultades de la UMU::Facultad de Informática
    The massive collection of data via emerging technologies like the Internet of Things (IoT) requires finding optimal ways to reduce the created features that have a potential impact on the information that can be extracted through the machine learning process. The mining of knowledge related to a concept is done on the basis of the features of data. The process of finding the best combination of features is called feature selection. In this paper we deal with multivariate time-dependent series of data points for energy forecasting in smart buildings. We propose a methodology to transform the time-dependent database into a structure that standard machine learning algorithms can process, and then, apply different types of feature selection methods for regression tasks. We used Weka for the tasks of database transformation, feature selection, regression, statistical test and forecasting. The proposed methodology improves MAE by 59.97% and RMSE by 40.75%, evaluated on training data, and it improves MAE by 42.28% and RMSE by 36.62%, evaluated on test data, on average for 1-step-ahead, 2-step-ahead and 3-step-ahead when compared to not applying any feature selection methodology.
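The database transformation step described above — reshaping a time-dependent series into fixed-length rows that standard learners accept — can be sketched as a sliding window (the lag count and horizon below are illustrative choices, not the paper's settings):

```python
def to_supervised(series, n_lags, horizon=1):
    """Turn a univariate series into supervised (X, y) rows: each row
    holds `n_lags` past values, and the target is the value `horizon`
    steps ahead of the window."""
    X, y = [], []
    for i in range(len(series) - n_lags - horizon + 1):
        X.append(series[i:i + n_lags])          # lagged inputs
        y.append(series[i + n_lags + horizon - 1])  # future target
    return X, y
```

Once the data are in this shape, any standard feature selection or regression method (as in the Weka workflow described above) can be applied, with each lag acting as an ordinary feature.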
  • Open Access
    A multimodal study of the interplay between stress, executive function, and biometrics in game-based assessment
    (Elsevier, 2024-05-15) Albaladejo González, Mariano; Gaspar Marco, Rubén; Tsai, Nancy; Gómez Mármol, Félix; Ruipérez Valiente, José A.; Ingeniería de la Información y las Comunicaciones
    Managing stress is a crucial soft skill that affects cognitive performance and health. Stress detection through biometrics can be used to improve and evaluate stress management. However, measuring the effects of stress on biometrics and executive functions is difficult and dependent on the individual. Despite these challenges, this paper presents a case study that collects a comprehensive multimodal dataset with two stress metrics, four biometric signals, and twenty-two executive function metrics from Game-based Assessment (GBA) trace data specifically designed for this purpose. The experiments suggest that biometrics, especially the heart rate and skin temperature, are effective predictors of stress. Additionally, noteworthy correlations were observed between heart rate and certain executive function variables. The levels of GBA that measured shifting and processing speed showed a higher heart rate than the response inhibition levels. This case study, together with the developed stress detectors, enables the detection of persons who struggle to manage stress and measure their executive function performance under stressful situations.
  • Embargo
    A novel Machine Learning-based approach for the detection of SSH botnet infection
    (2021-02) Martínez Garre, José Tomás; Gil Pérez, Manuel; Ruiz-Martínez, Antonio; Ingeniería de la Información y las Comunicaciones
    Botnets are causing severe damages to users, companies, and governments through information theft, abuse of online services, DDoS attacks, etc. Although significant research is being made to detect them and mitigate their effect, they are exponentially increasing due to new zero-day attacks, a variation of their behavior, and obfuscation techniques. High Interaction Honeypots (HIH) are the only honeypots able to capture attacks and log all the information generated by attackers when setting up a botnet. The data generated is being processed using Machine Learning (ML) techniques for detection since they can detect hidden patterns. However, so far, research has been focused on intermediate phases of the botnet’s life cycle during operation, underestimating the initial phase of infection. To the best of our knowledge, this is the first solution in the infection phase of SSH-based botnets. Therefore, we have designed an approach based on an SSH-based HIH to generate a dataset consisting of executed commands and network information. Herein, we have applied ML techniques for the development of a real-time detection model. This approach reached a very high level of prediction and zero false negatives. Indeed, our system detected all known and unknown SSH sessions intended to infect our honeypots. Thus, our research has demonstrated that new SSH infections can be detected through ML techniques.
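One simple way to turn logged SSH commands into classifier features, in the spirit of the dataset described above, is a token-count vector over a fixed vocabulary (an assumption for illustration, not the paper's exact feature set):

```python
from collections import Counter

def command_features(session, vocabulary):
    """Represent an SSH session (a list of executed command lines) as
    token counts over a fixed vocabulary, producing a numeric vector
    that an ML classifier can consume."""
    counts = Counter(tok for line in session for tok in line.split())
    return [counts.get(tok, 0) for tok in vocabulary]
```

A session that downloads and marks a binary executable would then light up the corresponding entries of the vector, which is the kind of pattern a trained model can pick up on.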
  • Open Access
    A predictive model for hospitalization and survival to COVID‑19 in a retrospective population‑based study
    (Nature Research, 2022-10-28) Cisterna García, Alejandro; Guillén Teruel, Antonio; Caracena, Marcos; Pérez-Cuadrado Martínez, Enrique; Jiménez Barrionuevo, Fernando; Francisco Verdú, Francisco J.; Reina, Gabriel; González Billalabeitia, Enrique; Palma Méndez, José Tomás; Sánchez Ferrer, Álvaro; Botía Blaya, Juan Antonio; Ingeniería de la Información y las Comunicaciones; Facultades de la UMU::Facultad de Informática
    The development of tools that provide early triage of COVID-19 patients with minimal use of diagnostic tests, based on easily accessible data, can be of vital importance in reducing COVID-19 mortality rates during high-incidence scenarios. This work proposes a machine learning model to predict mortality and risk of hospitalization using 2 simple demographic features and 19 comorbidities obtained from 86,867 electronic medical records of COVID-19 patients, and a new method (LR-IPIP) designed to deal with data imbalance problems. The model was able to predict with high accuracy (90–93%, ROC-AUC = 0.94) the patient's final status (deceased or discharged), while its accuracy was medium (71–73%, ROC-AUC = 0.75) with respect to the risk of hospitalization. The most relevant characteristics for these models were age, sex, number of comorbidities, osteoarthritis, obesity, depression, and renal failure. Finally, to facilitate its use by clinicians, a user-friendly website has been developed (https://alejandrocisterna.shinyapps.io/PROVIA).
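A common way to handle the class imbalance the abstract mentions is to train an ensemble on balanced undersamples of the majority class; the sketch below shows only that generic subsampling step (it is not the LR-IPIP method itself, whose details are in the paper):

```python
import random

def balanced_subsamples(majority, minority, n_models, seed=0):
    """Draw `n_models` balanced training sets by undersampling the
    majority class down to the minority-class size. Each returned set
    can train one member of an ensemble whose votes are combined."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    size = len(minority)
    return [rng.sample(majority, size) + list(minority)
            for _ in range(n_models)]
```

Because each model sees a different slice of the majority class, the ensemble uses more of the available data than a single undersampled training set would.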
  • Open Access
    A ridge-based framework for segmentation of 3D electron microscopy datasets
    (Elsevier, 2013-01) Martinez-Sanchez, Antonio; García, Inmaculada; Fernández, Jose-Jesús; Ingeniería de la Información y las Comunicaciones
    Three-dimensional (3D) electron microscopy (EM) has become a major player in structural cell biology as it enables the analysis of subcellular architecture at an unprecedented level of detail. Interpretation of the resulting 3D volumes strongly depends on segmentation, which consists in decomposing the volume into their structural components. The computational approaches proposed so far have not turned out to be of general applicability. Thus, manual segmentation still remains a prevalent method. Here, a new computational framework for segmentation of 3D EM datasets is introduced. It relies on detection and characterization of ridges (i.e. local maxima). The detected ridges are modelled as asymmetric Gaussian functions whose parameters constitute ridge descriptors. This local information is then used to cluster the ridges, which leads to the ultimate segmentation. In this work we focus on membranes and locally planar structures in general. The performance of the framework is illustrated with its application to a number of complex 3D datasets and a quantitative analysis.
  • Embargo
    A sound and complete fuzzy temporal constraint logic
    (Institute of Electrical and Electronics Engineers, 2006-02) Cárdenas Viedma, María Antonia; Ingeniería de la Información y las Comunicaciones
    In this work, we define an extended fuzzy temporal constraint logic (EFTCL) based on possibilistic logic. EFTCL allows us to handle fuzzy temporal constraints between temporal variables and, therefore, enables us to express interrelated events through fuzzy temporal constraints. EFTCL is compatible with a theoretical temporal reasoning model: the fuzzy temporal constraint networks (FTCN). The syntax, the semantics and the deduction and refutation theorems for EFTCL are similar to those defined for the sound and noncomplete fuzzy temporal constraint logic (FTCL). In this paper, a resolution principle for performing inferences which take these constraints into account is proposed for EFTCL. Moreover, we prove the soundness and the completeness of the refutation by resolution in EFTCL.
  • Open Access
    A time series forecasting based multi-criteria methodology for air quality prediction
    (Elsevier, 2021-09-07) Espinosa Fernández, Raquel; Palma Méndez, José Tomás; Jiménez Barrionuevo, Fernando; Kamińska, Joanna; Sciavicco, Guido; Lucena Sánchez, Estrella; Ingeniería de la Información y las Comunicaciones
    There is a very extensive literature on the design and test of models of environmental pollution, especially in the atmosphere. Current and recent models, however, are focused on explaining the causes and their temporal relationships, but do not explore, in full detail, the performance of pure forecasting models. We consider here three years of data containing hourly nitrogen oxides concentrations in the air; exposure to high concentrations of these pollutants has been indicated as a potential cause of numerous respiratory, circulatory, and even nervous diseases. Nitrogen oxides concentrations are paired with meteorological and vehicle traffic data for each measure. We propose a methodology based on exactness and robustness criteria to compare different pollutant forecasting models and their characteristics. 1DCNN, GRU and LSTM deep learning models, along with Random Forest, Lasso Regression and Support Vector Machines regression models, are analyzed with different window sizes. As a result, our best models offer a very reliable 24-hour-ahead prediction of the concentration of pollutants in the air in the considered area, which can be used to plan and implement different kinds of interventions and measures to mitigate the effects on the population.
  • Open Access
    A Transfer Learning Framework for predictive energy-related scenarios in Smart Buildings
    (IEEE Transactions on Industry Applications, 2023-02-01) González Vidal, Aurora; Niu, Shuteng; Song, Houbing; Skarmeta Gómez, Antonio; Mendoza Bernal, José; Ingeniería de la Información y las Comunicaciones; Facultades de la UMU::Facultad de Informática
    Human activities and city routines follow patterns. Transfer learning can help achieve scalable solutions towards the realisation of smart cities by accounting for similarities between regions, domains, and activities. In this study, we propose a Transfer Learning-based framework for smart buildings to test this hypothesis in energy-related problems. Our framework has two major components: the network creation and the transferable predictive model. To create the network that groups buildings sharing characteristics, we evaluated two strategies: a novel clustering algorithm for mixed data, k-prod, and clustering the image-based representation of time series. Then, a combination of Long Short Term Memory and Convolutional Neural Network was trained on the centroids of the clusters for energy consumption prediction. The Coefficient of Variation of the Root Mean Squared Error (CVRMSE) of the predictions in these clusters varies between 3.85% and 58.85%. The obtained parameters were transferred to the rest of the buildings for predictive purposes, yielding accurate results in buildings with little data. Our framework deals with insufficient training data, since parameters from scenarios with more sensors can be received. It also achieves state-of-the-art performance on 3 datasets from different sources comprising in total 533 rooms/buildings and two energy efficiency domains: consumption prediction, reducing the CVRMSE by 21.6%, and air conditioning usage prediction, moving from a 4.18% to a 0.28% CVRMSE. Our framework extracts more knowledge from available IoT deployments, so that smartness can be spread between environments at a lower cost, given that less individual effort will be needed.
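The CVRMSE metric quoted above is straightforward to state: the RMSE of the predictions normalised by the mean of the measurements, expressed as a percentage. A minimal sketch:

```python
def cvrmse(pred, true):
    """Coefficient of Variation of the RMSE (%): RMSE of the
    predictions divided by the mean of the measured values."""
    n = len(true)
    rmse = (sum((p - t) ** 2 for p, t in zip(pred, true)) / n) ** 0.5
    return 100.0 * rmse / (sum(true) / n)
```

Normalising by the mean is what makes the figures for rooms and buildings of very different consumption scales comparable.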
  • Open Access
    Adapting Knowledge Inference Algorithms to Measure Geometry Competencies through a Puzzle Game
    (ACM, 2023-09-06) Strukova, Sofia; Ruipérez Valiente, José A.; Gómez Mármol, Félix; Ingeniería de la Información y las Comunicaciones
    The rapid technological evolution of the last years has motivated students to develop capabilities that will prepare them for an unknown future in the 21st century. In this context, many teachers intend to optimise the learning process, making it more dynamic and exciting through the introduction of gamification. Thus, this article focuses on a data-driven assessment of geometry competencies, which are essential for developing problem-solving and higher-order thinking skills. Our main goal is to adapt, evaluate and compare Bayesian Knowledge Tracing (BKT), Performance Factor Analysis (PFA), Elo, and Deep Knowledge Tracing (DKT) algorithms applied to the data of a geometry game named Shadowspect, in order to predict students’ performance by means of several classifier metrics. We analysed two algorithmic configurations, with and without prioritisation of Knowledge Components (KCs) – the skills needed to complete a puzzle successfully, and we found Elo to be the algorithm with the best prediction power with the ability to model the real knowledge of students. However, the best results are achieved without KCs because it is a challenging task to differentiate between KCs effectively in game environments. Our results prove that the above-mentioned algorithms can be applied in formal education to improve teaching, learning, and organisational efficiency.
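Of the four algorithms compared, Elo has the simplest update rule: predict the probability of a correct answer from the learner/item gap, then nudge both estimates toward the observed outcome. A minimal sketch (the learning rate and logistic base are illustrative choices, not the article's tuned values):

```python
def elo_update(skill, difficulty, correct, k=0.4):
    """One Elo step for learner modelling: `correct` is 1 or 0.
    Returns the updated skill and difficulty estimates plus the
    predicted probability of a correct answer before the update."""
    expected = 1.0 / (1.0 + 10 ** (difficulty - skill))
    skill += k * (correct - expected)        # surprise raises skill
    difficulty -= k * (correct - expected)   # and lowers difficulty
    return skill, difficulty, expected
```

The `expected` values produced before each update are exactly what the classifier metrics in such a comparison are computed over.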
  • Open Access
    Adding a Degree of Certainty to Deductions in a Fuzzy Temporal Constraint Prolog: FTCProlog
    (2024-07-12) Cárdenas-Viedma, María-Antonia; Ingeniería de la Información y las Comunicaciones
    The management of time is essential in most AI-related applications. In addition, we know that temporal information is often not precise. In fact, in most cases, it is necessary to deal with imprecision and/or uncertainty. On the other hand, there is the need to handle the implicit commonsense information present in many temporal statements. In this paper, we present FTCProlog, a logic programming language capable of handling fuzzy temporal constraints soundly and efficiently. The main difference of FTCProlog with respect to its predecessor, PROLogic, is its ability to associate a certainty index with deductions obtained through SLD-resolution. This resolution is based on a proposal within the theoretical logical framework FTCLogic. This model integrates a first-order logic based on possibilistic logic with the Fuzzy Temporal Constraint Networks (FTCNs) that allow efficient time management. The calculation of the certainty index can be useful in applications where one wants to verify the extent to which the times elapsed between certain events follow a given temporal pattern. In this paper, we demonstrate that the calculation of this index respects the properties of the theoretical model regarding its semantics. FTCProlog is implemented in Haskell.
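The kind of certainty index such a resolution step can attach to a deduction can be illustrated with a possibility distribution over an elapsed time (the trapezoidal shape below is an illustrative assumption, not FTCProlog's internal representation):

```python
def trapezoid_possibility(x, a, b, c, d):
    """Degree in [0, 1] to which a measured delay x satisfies a
    trapezoidal fuzzy temporal constraint with support [a, d] and
    core [b, c] (full possibility on the core, linear slopes)."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising slope
    return (d - x) / (d - c)       # falling slope
```

Combining such degrees across the constraints used in a derivation (e.g. by taking their minimum) yields a single certainty index for the conclusion, in the spirit of possibilistic logic.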
  • Open Access
    An open IoT platform for the management and analysis of energy data
    (Elsevier, 2019-03-01) Terroso-Sáenz, Fernando; González Vidal, Aurora; Ramallo González, Alfonso Pablo; Skarmeta Gómez, Antonio; Ingeniería de la Información y las Comunicaciones; Facultades de la UMU::Facultad de Informática
    Buildings are key players when looking at end-use energy demand. It is for this reason that, during the last few years, the Internet of Things (IoT) has been considered a tool that could bring great opportunities for energy reduction via the accurate monitoring and control of a large variety of energy-related agents in buildings. However, there is a lack of IoT platforms specifically oriented towards the proper processing, management and analysis of such large and diverse data. In this context, we put forward in this paper the IoT Energy Platform (IoTEP), which attempts to provide the first holistic solution for the management of IoT energy data. The platform we show here (based on FIWARE) includes several functionalities and features that are key when dealing with energy data quality assurance and support for data analytics. As part of this work, we have tested the IoTEP platform with a real use case that includes data and information from three buildings totaling hundreds of sensors. The platform has exceeded expectations, proving robust, adaptable and versatile for the application at hand.

DSpace software copyright © 2002-2026 LYRASIS
