Predicting the functions of a known protein is a substantial challenge in bioinformatics. Protein sequences, protein structures, protein-protein interaction networks, and microarray data are the forms of protein data most frequently used for function prediction. The proliferation of protein sequence data from high-throughput techniques over the past few decades makes sequences especially well suited to deep learning algorithms for protein function prediction, and many such techniques have been proposed. To provide a systematic view of the chronological evolution of these techniques, a survey of this body of work is needed. This survey comprehensively analyzes the latest methodologies, their benefits and drawbacks, and their predictive accuracy, and advocates a new direction toward interpretability for protein function prediction models.
In severe instances, cervical cancer poses a serious threat to a woman's life and can severely damage the female reproductive system. Optical coherence tomography (OCT) provides non-invasive, real-time, high-resolution imaging of cervical tissue. However, interpreting cervical OCT images is an expertise-dependent and time-consuming task, so quickly assembling a large number of high-quality labeled images is difficult, which poses a challenge for supervised learning. This study introduces the vision Transformer (ViT) architecture, which has achieved remarkable success in natural image analysis, to the classification of cervical OCT images. Specifically, we develop a self-supervised ViT-based computer-aided diagnosis (CADx) method to classify cervical OCT images efficiently. To improve transfer learning in the proposed classification model, we use masked autoencoders (MAE) for self-supervised pre-training on cervical OCT images. During fine-tuning, the ViT-based classification model extracts multi-scale features from OCT images of different resolutions and integrates them with a cross-attention module. Ten-fold cross-validation on an OCT image dataset from a multi-center clinical study in China, covering 733 patients, showed that our model achieves superior performance in classifying high-risk cervical diseases, including HSIL and cervical cancer, with an AUC of 0.9963 ± 0.00069, a sensitivity of 95.89 ± 3.30%, and a specificity of 98.23 ± 1.36%, outperforming comparable Transformer- and CNN-based models on the binary classification task. Importantly, using a cross-shaped voting strategy, our model achieved a sensitivity of 92.06% and a specificity of 95.56% on an external validation dataset of 288 three-dimensional (3D) OCT volumes from 118 Chinese patients at a different, new hospital. This result matched or exceeded the performance of four medical experts who had each used OCT for a year or more. Furthermore, leveraging the attention map of the standard ViT model, our model can identify and visualize local lesions well. This improved interpretability helps gynecologists locate and diagnose possible cervical diseases accurately.
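The abstract does not detail how the cross-attention module fuses multi-scale features; the following minimal PyTorch sketch shows one plausible form, in which fine-scale patch tokens query coarse-scale tokens. The class name, token counts, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical sketch: fuse ViT token sequences from two input
    resolutions with cross-attention plus a residual connection."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens_hi: torch.Tensor, tokens_lo: torch.Tensor) -> torch.Tensor:
        # High-resolution tokens query the low-resolution branch, so each
        # fine-scale patch attends to coarse-scale context.
        fused, _ = self.attn(query=tokens_hi, key=tokens_lo, value=tokens_lo)
        return self.norm(tokens_hi + fused)

# Example: 196 fine-scale tokens and 49 coarse-scale tokens, embedding dim 768.
fusion = CrossAttentionFusion()
out = fusion(torch.randn(2, 196, 768), torch.randn(2, 49, 768))
print(out.shape)  # torch.Size([2, 196, 768])
```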
In the global female population, breast cancer accounts for around 15% of all cancer deaths, and early, precise diagnosis improves survival. Machine learning strategies have been widely employed in recent decades to support accurate diagnosis of this disease, yet they typically require a large dataset for effective training. Syntactic approaches have seen limited use in this setting, even though their efficacy can remain high with a small training set. This article applies a syntactic technique to classify masses as benign or malignant. A stochastic grammar approach, combined with features from a polygonal representation of mammographic masses, was used to discriminate the masses. Compared with other machine learning techniques, the grammar-based classifiers performed best on the classification task. Consistently high accuracy, ranging from 96% to 100%, underscores the effectiveness of grammatical approaches at discerning the two classes even when trained on a small set of images. Syntactic approaches deserve more frequent use in mass classification, as they can learn the patterns of benign and malignant masses from a limited image set and yield results comparable to current state-of-the-art techniques.
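To make the idea of stochastic-grammar classification concrete, here is a toy Python sketch: each class is modeled by production probabilities estimated from example symbol strings (e.g., turn directions along a polygonal contour), and a query contour is assigned to the class whose grammar gives it the higher likelihood. The symbols, training strings, and bigram-style grammar are illustrative assumptions, not the paper's actual grammar.

```python
import math
from collections import defaultdict

def train_grammar(strings):
    """Estimate production probabilities P(next symbol | current symbol)."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in strings:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def log_likelihood(grammar, s, floor=1e-6):
    """Score a symbol string under a grammar; unseen productions get a floor."""
    return sum(math.log(grammar.get(a, {}).get(b, floor))
               for a, b in zip(s, s[1:]))

# Illustrative symbols: 'S'=straight, 'L'=left turn, 'R'=right turn along the contour.
benign = train_grammar(["SSLSSLSSL", "SLSSLSSLS"])     # smooth, regular contours
malignant = train_grammar(["LRLRLLRRL", "RLLRRLRLR"])  # irregular, spiculated contours

query = "SSLSLSSLS"
label = ("benign" if log_likelihood(benign, query) > log_likelihood(malignant, query)
         else "malignant")
print(label)
```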
A significant contributor to the global death toll, pneumonia remains a substantial health concern. Deep learning can be used to analyze chest X-ray images and locate pneumonia. Nevertheless, current methods do not adequately handle the substantial variation and indistinct borders of pneumonia regions. We present a RetinaNet-based deep learning method for pneumonia detection. Introducing Res2Net into RetinaNet allows us to capture the multi-scale features inherent in pneumonia. We then apply a novel fusion technique, Fuzzy Non-Maximum Suppression (FNMS), which merges overlapping detection boxes into a more reliable predicted bounding box. Finally, performance is further improved by ensembling two models with different backbone architectures. We report results for both the single-model and model-ensemble settings. In the single-model setting, RetinaNet with the FNMS algorithm and a Res2Net backbone outperforms the standard RetinaNet and other models. When fusing predicted boxes in the model ensemble, FNMS achieves a better final score than NMS, Soft-NMS, and weighted boxes fusion. Experiments on a pneumonia detection dataset confirm the superiority of the FNMS algorithm and the proposed method for pneumonia detection.
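The abstract does not specify the FNMS membership function; the sketch below only illustrates the general idea of fusing, rather than suppressing, overlapping boxes, using score-derived fuzzy weights to average box coordinates. The IoU threshold and weighting rule are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuzzy_box_fusion(boxes, scores, iou_thr=0.5):
    """Greedy fusion: overlapping boxes are merged by score-weighted
    averaging instead of being discarded (illustrative stand-in for FNMS)."""
    order = np.argsort(scores)[::-1]
    boxes = np.asarray(boxes, float)[order]
    scores = np.asarray(scores, float)[order]
    fused_boxes, fused_scores = [], []
    used = np.zeros(len(boxes), bool)
    for i in range(len(boxes)):
        if used[i]:
            continue
        group = [j for j in range(i, len(boxes))
                 if not used[j] and iou(boxes[i], boxes[j]) >= iou_thr]
        used[group] = True
        w = scores[group] / scores[group].sum()        # fuzzy membership weights
        fused_boxes.append((w[:, None] * boxes[group]).sum(axis=0))
        fused_scores.append(scores[group].max())
    return np.array(fused_boxes), np.array(fused_scores)
```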
Early detection of heart disease is significantly facilitated by the assessment of heart sounds. However, manual detection relies on clinicians with deep clinical experience, which increases difficulty and uncertainty, particularly in less developed medical settings. This paper proposes a robust neural network with an improved attention module for automatic classification of heart sound waveforms. In preprocessing, noise is first removed with a Butterworth bandpass filter, and the heart sound recordings are then converted into a time-frequency representation via the short-time Fourier transform (STFT). The model operates on the STFT spectrum. It extracts features automatically through four down-sampling blocks with different filters. An improved attention module, integrating ideas from the Squeeze-and-Excitation and coordinate attention modules, is then developed for feature fusion. Finally, the neural network assigns a category to the heart sound waves based on the learned features. To minimize model weight and avoid overfitting, a global average pooling layer is adopted; to counter the data imbalance problem, focal loss is used as the loss function. Validation experiments on two publicly available datasets demonstrate the effectiveness and advantages of our method.
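The described preprocessing maps directly onto standard SciPy calls; the sketch below assumes illustrative cutoff frequencies, filter order, sampling rate, and STFT window length, since the abstract does not state them.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def preprocess_heart_sound(x, fs=2000, low=25.0, high=400.0, order=4):
    """Butterworth band-pass denoising followed by STFT (parameter values
    are assumptions, not the paper's settings)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, x)                   # zero-phase band-pass filtering
    f, t, spec = stft(filtered, fs=fs, nperseg=256)  # time-frequency representation
    return np.abs(spec)                              # magnitude spectrum fed to the network

x = np.random.randn(10 * 2000)  # 10 s of synthetic signal at 2 kHz
print(preprocess_heart_sound(x).shape)
```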
A brain-computer interface (BCI) system requires a robust and efficient decoding model that can handle variation across subjects and time periods, and such models are in high demand. The performance of most electroencephalogram (EEG) decoding models depends on the characteristics of individual subjects and specific timeframes, requiring calibration and training on annotated data before each use. However, this becomes unacceptable when prolonged data collection is too burdensome for subjects, especially in rehabilitation settings based on motor imagery (MI) for disabilities. To remedy this situation, we propose Iterative Self-Training Multi-Subject Domain Adaptation (ISMDA), an unsupervised domain adaptation framework focused on the offline MI task. First, a purpose-built feature extractor maps the EEG onto a latent space with discriminative representations. Second, an attention module performs dynamic transfer, aligning source- and target-domain samples more closely in the latent space. Third, in the first stage of iterative training, an independent domain-specific classifier groups target-domain samples by their shared characteristics. Finally, in the second stage of iterative training, a certainty- and confidence-based pseudolabeling algorithm calibrates the discrepancy between predicted and empirical probabilities. Extensive evaluation on three publicly available MI datasets, namely BCI IV IIa, the High Gamma dataset, and the dataset of Kwon et al., assessed the model's effectiveness. On the three datasets, the proposed method outperformed current state-of-the-art offline algorithms in cross-subject classification, achieving accuracies of 69.51%, 82.38%, and 90.98%. All results indicate that the proposed approach successfully addresses the principal obstacles of the offline MI paradigm.
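As an illustration of the certainty- and confidence-based pseudolabeling step, the sketch below selects target-domain samples whose softmax confidence is high and whose predictive entropy is low; the dual-threshold criterion and its values are assumptions, not the exact ISMDA rule.

```python
import torch

def select_pseudo_labels(logits, conf_thr=0.9, ent_thr=0.5):
    """Keep unlabeled samples that are both confident (high max softmax
    probability) and certain (low entropy); illustrative criterion only."""
    probs = torch.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)                   # confidence and argmax label
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    keep = (conf >= conf_thr) & (entropy <= ent_thr)  # certain AND confident
    return labels[keep], keep                         # pseudo-labels and selection mask

logits = torch.randn(32, 4)  # e.g., 4 MI classes for 32 target-domain trials
pseudo, mask = select_pseudo_labels(logits)
print(mask.sum().item(), "samples selected for self-training")
```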
Assessment of fetal development is a critical aspect of maternal and fetal healthcare. Conditions linked to an increased risk of fetal growth restriction (FGR) are substantially more common in low- and middle-income countries, where barriers to healthcare and social services compound fetal and maternal health problems. The unaffordability of diagnostic technology is a further barrier. This work presents an end-to-end algorithm that uses a low-cost, hand-held Doppler ultrasound device to estimate gestational age (GA) and, by extension, fetal growth restriction (FGR).