| title | abstract | journal | date | authors | doi |
|---|---|---|---|---|---|
Biomarkers extracted by fully automated body composition analysis from chest CT correlate with SARS-CoV-2 outcome severity.
|
The complex process of manual biomarker extraction from body composition analysis (BCA) has so far restricted the analysis of SARS-CoV-2 outcomes to small patient cohorts and a limited number of tissue types. We investigate the association of two BCA-based biomarkers with the development of severe SARS-CoV-2 infections for 918 patients (354 female, 564 male) regarding disease severity and mortality (186 deceased). Multiple tissues, such as muscle, bone, and adipose tissue, are acquired with a deep-learning-based, fully automated BCA from computed tomography images of the chest. The BCA features and markers were analyzed univariately with a Shapiro-Wilk and a two-sided Mann-Whitney U test. In a multivariate approach, the obtained markers were adjusted by a defined set of laboratory parameters promoted by other studies. Subsequently, the relationship between the markers and two endpoints, namely severity and mortality, was investigated with regard to statistical significance. The univariate approach showed that the muscle volume was significant for female (p
|
Scientific reports
| 2022-10-01T00:00:00
|
[
"René Hosch",
"Simone Kattner",
"Marc Moritz Berger",
"Thorsten Brenner",
"Johannes Haubold",
"Jens Kleesiek",
"Sven Koitka",
"Lennard Kroll",
"Anisa Kureishi",
"Nils Flaschel",
"Felix Nensa"
] |
10.1038/s41598-022-20419-w
10.1016/j.dsx.2020.06.060
10.2196/26075
10.1038/s41586-020-2521-4
10.1016/S2213-8587(21)00089-9
10.1016/j.clnesp.2020.09.018
10.1186/s12911-021-01576-w
10.1016/j.imu.2021.100564
10.1016/j.isci.2021.103523
10.1016/j.metabol.2020.154378
10.1093/bja/aev541
10.1007/s00261-020-02693-2
10.1186/s12933-021-01327-1
10.1002/oby.22971
10.1080/17512433.2017.1347503
10.1016/j.ejca.2015.12.030
10.1002/jcsm.12379
10.1097/RTI.0000000000000428
10.1002/jcsm.12573
10.1007/s00330-020-07147-3
10.1016/j.ejrad.2021.110031
10.1038/s41598-021-00161-5
10.1111/nyas.12842
10.3390/jcm10020356
10.1038/oby.2008.575
10.1038/ncpcardio0319
10.1016/j.numecd.2013.11.010
10.3389/fphys.2021.651167
10.1007/s11906-019-0939-6
10.1093/ehjci/jeu006
10.1016/j.jcct.2017.11.007
10.1016/j.amjcard.2016.01.033
10.5812/ijem.3505
10.1038/s41592-019-0686-2
10.1016/j.metabol.2020.154436
10.1148/radiol.2021204141
|
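The univariate step in the record above (a Shapiro-Wilk normality check followed by a two-sided Mann-Whitney U test) can be illustrated with a dependency-free sketch of the U statistic; the muscle-volume values below are invented for illustration, and a real analysis would use a statistics package to obtain the exact p-value.

```python
from itertools import product

def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic via pairwise comparison.

    Counts, over all (a, b) pairs, how often a > b (ties count 0.5).
    Returns U and the rank-biserial effect size; a p-value would come
    from the normal approximation or exact tables in a stats package.
    """
    u = 0.0
    for a, b in product(sample_a, sample_b):
        if a > b:
            u += 1.0
        elif a == b:
            u += 0.5
    n1, n2 = len(sample_a), len(sample_b)
    effect = 2.0 * u / (n1 * n2) - 1.0  # rank-biserial correlation
    return u, effect

# Hypothetical muscle-volume values (cm^3) for two outcome groups
severe = [210.0, 198.5, 187.0, 225.0]
mild = [260.0, 245.5, 233.0, 270.0, 251.0]
u, effect = mann_whitney_u(severe, mild)
print(u, effect)  # U = 0.0: every severe value is below every mild value
```

Since U = 0 here (the two toy groups do not overlap at all), the rank-biserial effect size reaches its extreme of -1; real biomarker distributions overlap, giving intermediate values.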
AI in Health Science: A Perspective.
|
Artificial Intelligence (AI), the use of a computer to mimic intelligent behaviour, has deeply influenced medical practice by helping practitioners understand complicated and varied types of data. Many medical professions, particularly those reliant on imaging or surgery, are progressively adopting AI. While the cognitive component of AI can outperform human intellect, it lacks awareness, emotions, intuition, and adaptability. With minimal human participation, AI is growing quickly in healthcare, and numerous AI applications have been created to address current issues. This article explains AI, its various elements, and how to utilize them in healthcare. It also offers practical suggestions for developing an AI strategy to assist the digital healthcare transition.
|
Current pharmaceutical biotechnology
| 2022-10-01T00:00:00
|
[
"Raghav Mishra",
"Kajal Chaudhary",
"Isha Mishra"
] |
10.2174/1389201023666220929145220
|
COVID-19 Semantic Pneumonia Segmentation and Classification Using Artificial Intelligence.
|
Coronavirus disease 2019 (COVID-19) has become a pandemic, and its seriousness is evident from the large numbers of infections and deaths worldwide. This paper presents an efficient deep semantic segmentation network (DeepLabv3Plus). Initially, dynamic adaptive histogram equalization is utilized to enhance the images. Data augmentation techniques are then used to augment the enhanced images. The second stage builds a custom convolutional neural network model using several pretrained ImageNet models and compares them to repeatedly trim the best-performing models to reduce complexity and improve memory efficiency. Several experiments were done using different techniques and parameters. The proposed model achieved an average accuracy of 99.6% and an area under the curve of 0.996 in COVID-19 detection. The paper also discusses how to train a customized convolutional neural network with various parameters on a set of chest X-rays.
|
Contrast media & molecular imaging
| 2022-10-01T00:00:00
|
[
"Mohammed J Abdulaal",
"Ibrahim M Mehedi",
"Abdullah M Abusorrah",
"Abdulah Jeza Aljohani",
"Ahmad H Milyani",
"Md Masud Rana",
"Mohamed Mahmoud"
] |
10.1155/2022/5297709
10.1109/tnnls.2020.2995800
10.1109/tnnls.2017.2766168
10.1016/j.media.2020.101836
10.1109/access.2020.3016780
10.1016/j.mehy.2020.109761
10.1016/j.imu.2020.100360
10.1016/j.knosys.2020.106270
10.1016/j.asoc.2020.106580
10.1109/ICNN.1993.298572
10.1109/tmi.2016.2528162
10.1007/s13246-020-00888-x
10.1016/j.compbiomed.2020.103792
10.1001/jama.2020.3786
10.1007/978-981-16-7618-5_3
10.1016/j.eswa.2020.114054
10.7717/peerj-cs.358
10.1109/tgrs.2022.3155765
10.1504/ijhm.2021.114174
10.1109/access.2020.2971257
10.1038/s41598-020-76550-z
10.1101/2020.03.26.20044610
10.1016/j.cmpb.2020.105532
10.3389/fpubh.2020.00437
10.1109/access.2020.3003810
10.1016/j.patrec.2020.04.016
10.1109/access.2019.2941937
10.1504/ijhm.2021.114173
10.1109/access.2021.3120717
10.1504/ijhm.2021.120616
10.1109/access.2021.3101142
|
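The area under the curve of 0.996 reported above can be understood through the rank interpretation of AUC: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with made-up labels and scores:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count 0.5) -- equivalent to the trapezoidal ROC integral."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy ground truth (1 = COVID-19) and hypothetical model scores
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
print(roc_auc(labels, scores))  # 5 of 6 positive/negative pairs ranked correctly
```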
A novel multimodal fusion framework for early diagnosis and accurate classification of COVID-19 patients using X-ray images and speech signal processing techniques.
|
The COVID-19 outbreak has become one of the most challenging problems for humankind. It is a communicable disease caused by a new coronavirus strain, which has already infected over 375 million people and caused almost 6 million deaths. This paper aims to develop and design a framework for early diagnosis and fast classification of COVID-19 symptoms using multimodal deep learning techniques.
We collected chest X-ray and cough sample data from open-source datasets (including the Cohen dataset) and local hospitals. Features are extracted from the chest X-ray datasets. We also used cough audio from the Coswara project and local hospitals. The publicly available Coughvid, DetectNow, and Virufy datasets are used to evaluate COVID-19 detection based on speech, respiratory, and cough sounds. The collected audio data comprise slow and fast breathing, shallow and deep coughing, spoken digits, and phonation of sustained vowels. Gender, geographical location, age, preexisting medical conditions, and current health status (COVID-19 and non-COVID-19) are recorded.
The proposed framework uses a selection algorithm over pre-trained networks to determine the best fusion model from the pre-trained chest X-ray and cough models. Discriminant correlation analysis is then used to fuse the discriminatory features from the two models. The proposed framework achieved recognition accuracy, specificity, and sensitivity of 98.91%, 96.25%, and 97.69%, respectively; with the fusion method, we obtained 94.99% accuracy.
This paper examines the effectiveness of well-known ML architectures on a joint collection of chest X-rays and cough samples for early classification of COVID-19. It shows that existing methods can be used effectively for diagnosis, suggesting that the fusion learning paradigm could be a crucial asset in diagnosing future unknown illnesses. The proposed framework supports health informatics with early diagnosis, clinical decision support, and accurate prediction.
|
Computer methods and programs in biomedicine
| 2022-09-30T00:00:00
|
[
"Santosh Kumar",
"Mithilesh Kumar Chaube",
"Saeed Hamood Alsamhi",
"Sachin Kumar Gupta",
"Mohsen Guizani",
"Raffaele Gravina",
"Giancarlo Fortino"
] |
10.1016/j.cmpb.2022.107109
10.1109/TDSC.2022.3144657
|
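The framework above fuses X-ray and cough features with discriminant correlation analysis, which involves class-wise covariance transforms and is beyond a short sketch. As a deliberately simplified stand-in that conveys only the general multimodal-fusion idea, the following averages per-modality class probabilities; the weights and probabilities are hypothetical, not taken from the paper:

```python
def fuse_probs(p_xray, p_cough, w_xray=0.6):
    """Weighted average of per-modality class probabilities (late fusion)."""
    w_cough = 1.0 - w_xray
    return [w_xray * a + w_cough * b for a, b in zip(p_xray, p_cough)]

# Hypothetical class probabilities: [COVID-19, non-COVID]
p_xray = [0.80, 0.20]   # from the chest X-ray model
p_cough = [0.55, 0.45]  # from the cough-audio model
fused = fuse_probs(p_xray, p_cough)
print(fused)  # ≈ [0.7, 0.3] with w_xray=0.6
```

Feature-level fusion such as DCA instead transforms and concatenates the feature vectors before classification; the score-level average here is only the simplest point on that spectrum.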
A Deep Learning based Solution (Covi-DeteCT) Amidst COVID-19.
|
The whole world has been severely affected by the COVID-19 pandemic. The rapid and large-scale spread has put immense pressure on the medical sector, increasing the chances of false detection due to human error and the mishandling of reports. During COVID-19 outbreaks there is also a critical shortage of test kits, so quick diagnostic testing has become one of the main challenges. Many artificial-intelligence-based methodologies have been proposed for the detection of COVID-19; a few suggested integrating the model into a publicly usable platform, but to the best of our knowledge none has been deployed as a working application.
Keeping the above in mind, the objective is to provide an easy-to-use platform for COVID-19 identification, as a contribution to the digitization of health facilities. This work fuses deep learning classifiers and medical images to provide speedy and accurate identification of COVID-19 by analyzing the user's CT scan images of the lungs. It will assist healthcare workers in reducing their workload and decreasing the possibility of false detection.
In this work, various models, namely ResNet50V2, ResNet101V2, and an adjusted version of ResNet101V2 with a Feature Pyramid Network, have been applied for classifying the CT scan images into two categories: normal or COVID-19 positive.
A detailed analysis of all three models' performance has been done on the SARS-CoV-2 dataset with various metrics like precision, recall, F1-score, ROC curve, etc. It was found that ResNet50V2 achieves an accuracy of 96.79%, whereas ResNet101V2 achieves an accuracy of 97.79%. An accuracy of 98.19% has been obtained by ResNet101V2 with the Feature Pyramid Network. As ResNet101V2 with the Feature Pyramid Network shows the best results, it is further incorporated into a working application that takes CT images as input from the user, feeds them into the trained model, and detects the presence of COVID-19 infection.
A mobile application integrated with the deeper ResNet variant, ResNet101V2 with FPN, checks for the presence of COVID-19 in a fast and accurate manner. People can use this application on their smart mobile devices. This automated system would also assist healthcare workers, ultimately reducing their workload and decreasing the possibility of false detection.
|
Current medical imaging
| 2022-09-30T00:00:00
|
[
"Kavita Pandey"
] |
10.2174/1573405618666220928145344
|
IEViT: An enhanced vision transformer architecture for chest X-ray image classification.
|
Chest X-ray imaging is a relatively cheap and accessible diagnostic tool that can assist in the diagnosis of various conditions, including pneumonia, tuberculosis, COVID-19, and others. However, the requirement for expert radiologists to view and interpret chest X-ray images can be a bottleneck, especially in remote and deprived areas. Recent advances in machine learning have made possible the automated diagnosis of chest X-ray scans. In this work, we examine the use of a novel Transformer-based deep learning model for the task of chest X-ray image classification.
We first examine the performance of the Vision Transformer (ViT) state-of-the-art image classification machine learning model for the task of chest X-ray image classification, and then propose and evaluate the Input Enhanced Vision Transformer (IEViT), a novel enhanced Vision Transformer model that can achieve improved performance on chest X-ray images associated with various pathologies.
Experiments on four chest X-ray image data sets containing various pathologies (tuberculosis, pneumonia, COVID-19) demonstrated that the proposed IEViT model outperformed ViT for all the data sets and variants examined, achieving an F1-score between 96.39% and 100%, and an improvement over ViT of up to +5.82% in terms of F1-score across the four examined data sets. IEViT's maximum sensitivity (recall) ranged between 93.50% and 100% across the four data sets, with an improvement over ViT of up to +3%, whereas IEViT's maximum precision ranged between 97.96% and 100% across the four data sets, with an improvement over ViT of up to +6.41%.
Results showed that the proposed IEViT model outperformed all ViT's variants for all the examined chest X-ray image data sets, demonstrating its superiority and generalisation ability. Given the relatively low cost and the widespread accessibility of chest X-ray imaging, the use of the proposed IEViT model can potentially offer a powerful, but relatively cheap and accessible method for assisting diagnosis using chest X-ray images.
|
Computer methods and programs in biomedicine
| 2022-09-27T00:00:00
|
[
"Gabriel Iluebe Okolo",
"Stamos Katsigiannis",
"Naeem Ramzan"
] |
10.1016/j.cmpb.2022.107141
|
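A Vision Transformer such as the ViT and IEViT models above begins by splitting the input image into fixed-size patches that are flattened into token vectors before the transformer encoder. IEViT's specific input enhancement is not reproduced here; the sketch shows only the standard patch-extraction step on a toy 4×4 image:

```python
def patchify(image, patch):
    """Split a 2-D image (list of rows) into flattened patch vectors,
    row-major, as the first step of a ViT-style embedding."""
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0
    patches = []
    for pr in range(0, h, patch):
        for pc in range(0, w, patch):
            vec = [image[r][c]
                   for r in range(pr, pr + patch)
                   for c in range(pc, pc + patch)]
            patches.append(vec)
    return patches

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
ps = patchify(img, 2)
print(len(ps), ps[0])  # 4 patches; the first is [0, 1, 4, 5]
```

In a full ViT each patch vector would then pass through a linear projection and receive a positional embedding; a real chest X-ray (e.g. 224×224 with 16×16 patches) yields 196 tokens.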
E-GCS: Detection of COVID-19 through classification by attention bottleneck residual network.
|
Recently, the coronavirus disease 2019 (COVID-19) has caused the deaths of many people globally, creating a need to detect the disease and prevent its further spread. Hence, this study aims to predict COVID-19-infected patients based on deep learning (DL) and image processing.
The study classifies the normal and abnormal cases of COVID-19 across three medical imaging modalities, namely ultrasound, X-ray, and CT scan images, through the introduced attention bottleneck residual network (AB-ResNet). It also segments the abnormal infected area from normal images, localising the disease-infected area through the proposed edge-based graph cut segmentation (E-GCS).
AB-ResNet is used for classifying images, whereas E-GCS segments the abnormal images. The approach relies on DL and can accelerate the training of deep networks. It also increases network depth while keeping parameters to a minimum, minimising the impact of the vanishing gradient issue and attaining effective network performance with better accuracy.
Performance and comparative analyses are undertaken to evaluate the efficiency of the introduced system, and the results demonstrate its efficiency in COVID-19 detection with high accuracy (99%).
|
Engineering applications of artificial intelligence
| 2022-09-27T00:00:00
|
[
"T Ahila",
"A C Subhajini"
] |
10.1016/j.engappai.2022.105398
|
Application of Deep Learning Techniques in Diagnosis of Covid-19 (Coronavirus): A Systematic Review.
|
Covid-19 is now one of the most severe illnesses of the twenty-first century and has already endangered the lives of millions of people worldwide due to its acute pulmonary effects. Image-based diagnostic techniques like X-ray, CT, and ultrasound are commonly employed to obtain a quick and reliable assessment of the clinical condition. Identifying Covid-19 in such clinical scans is exceedingly time-consuming, labor-intensive, and susceptible to human error. As a result, radiography imaging approaches using Deep Learning (DL) are consistently employed to achieve strong results. Various artificial-intelligence-based systems have been developed for the early prediction of coronavirus from radiography pictures. Specific DL methods such as CNNs and RNNs extract highly discriminative characteristics, primarily in diagnostic imaging, and recent coronavirus studies have applied these techniques to radiography image scans. The disease, as well as the present pandemic, was studied using public and private data. A total of 64 pre-trained and custom DL models, organized by imaging modality as a taxonomy, are selected from the studied articles. The constraints relevant to DL-based techniques are sample selection, network architecture, training with minimal annotated data, and security issues. The review also covers causal agents, pathophysiology, immunological reactions, and the epidemiology of the illness. DL-based Covid-19 detection systems are the key focus of this review article, which is intended to help accelerate Covid-19 research.
|
Neural processing letters
| 2022-09-27T00:00:00
|
[
"Yogesh H Bhosale",
"K Sridhar Patnaik"
] |
10.1007/s11063-022-11023-0
10.1101/2020.02.25.20021568
10.1016/j.scs.2020.102571
10.1109/ACCESS.2021.3058066
10.1002/jmv.26709
10.1109/ACCESS.2020.3003810
10.33889/IJMEMS.2020.5.4.052
10.1080/14737159.2021.1962708
10.1016/j.matpr.2020.06.245
10.1016/j.jiph.2020.03.019
10.1080/14737159.2020.1757437
10.1109/ACCESS.2021.3054484
10.1080/14737159.2020.1816466
10.1109/JBHI.2020.3030224
10.1007/s12559-020-09787-5
10.1080/07391102.2020.1767212
10.1016/j.scs.2021.102777
10.1007/s10489-020-01831-z
10.1109/RBME.2020.2990959
10.1007/s00521-020-05437-x
10.1097/RLI.0000000000000748
10.1148/radiol.2020200905
10.1109/ACCESS.2021.3058537
10.1016/j.csbj.2021.02.016
10.1016/j.chaos.2020.110120
10.2196/19569
10.1007/s00500-020-05424-3
10.1038/s41746-021-00399-3
10.1016/j.ijleo.2021.166405
10.1109/TII.2021.3057683
10.1016/j.chaos.2020.110245
10.1109/JBHI.2020.3037127
10.1016/j.asoc.2021.107184
10.1016/j.irbm.2021.01.004
10.1016/j.ejrad.2020.109041
10.1016/j.aej.2021.01.011
10.1016/j.asoc.2020.106859
10.1016/j.chaos.2020.110190
10.1016/j.asoc.2020.106744
10.1016/j.asoc.2020.106885
10.1007/s10489-020-01902-1
10.1007/s13246-020-00865-4
10.1038/s41598-020-76550-z
10.7937/tcia.2020.6c7y-gq39
10.7910/DVN/6ACUZJ
10.1016/j.ejrad.2020.109402
10.21037/atm.2020.03.132
10.1016/j.chaos.2020.110495
10.1016/j.imu.2020.100505
10.1016/j.eswa.2020.114054
10.1016/j.asoc.2021.107160
10.1109/ACCESS.2020.3016780
10.1016/j.mehy.2020.109761
10.1016/j.imu.2020.100427
10.1007/s10140-020-01886-y
10.1016/j.media.2020.101913
10.1016/j.iot.2021.100377
10.1002/ima.22706,32,2,(419-434)
10.1007/s00264-020-04609-7
10.1016/j.bspc.2021.102490
10.1016/j.knosys.2020.106647
10.1016/j.ibmed.2020.100013
10.1016/j.imu.2020.100506
10.1016/j.scs.2020.102589
10.3390/app11020672
10.1016/j.advms.2020.06.005
10.1109/TMI.2020.2994459
10.1016/j.media.2021.101993
10.1016/j.compeleceng.2020.106960
10.1109/ACCESS.2020.3005510
10.1109/ACCESS.2020.2994762
10.3348/kjr.2020.0536
10.3390/electronics9091439
10.1007/s42600-021-00132-9
10.1186/s40537-020-00392-9
10.1016/j.compbiomed.2020.103792
|
Analysis of the Causes of Solitary Pulmonary Nodule Misdiagnosed as Lung Cancer by Using Artificial Intelligence: A Retrospective Study at a Single Center.
|
Artificial intelligence (AI) adopting deep learning technology has been widely used in the medical imaging domain in recent years. It has realized the automatic judgment of benign and malignant solitary pulmonary nodules (SPNs) and has even replaced the work of doctors to some extent. However, misdiagnoses can occur in certain cases, and only by determining their causes can AI play a larger role. A total of 21 Coronavirus disease 2019 (COVID-19) patients were diagnosed with SPN by CT imaging. Their clinical data, including general condition, imaging features, AI reports, and outcomes, were included in this retrospective study. Although COVID-19 was confirmed by reverse transcription-polymerase chain reaction (RT-PCR) testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), their CT imaging data were misjudged by AI as high-risk nodules for lung cancer. Imaging characteristics included burr sign (76.2%), lobulated sign (61.9%), pleural indentation (42.9%), smooth edges (23.8%), and cavity (14.3%). The accuracy of AI differed from that of radiologists in judging the nature of benign SPNs (p < 0.001, κ = 0.036 < 0.4, indicating poor agreement between the two diagnostic methods). COVID-19 patients with SPN might be misdiagnosed by the AI system, suggesting that it needs further optimization, especially in the event of a new disease outbreak.
|
Diagnostics (Basel, Switzerland)
| 2022-09-24T00:00:00
|
[
"Xiong-Ying Wu",
"Fan Ding",
"Kun Li",
"Wen-Cai Huang",
"Yong Zhang",
"Jian Zhu"
] |
10.3390/diagnostics12092218
10.1016/j.chest.2017.01.018
10.1148/radiol.2017161659
10.1109/TMI.2016.2629462
10.1016/j.media.2017.06.015
10.1002/mp.12846
10.1109/TBME.2016.2613502
10.1038/srep24454
10.1016/j.cell.2020.04.045
10.3390/cancers12082211
10.1056/NEJMoa2001316
10.1155/2017/4067832
10.1038/srep46479
10.1371/journal.pone.0248957
10.1016/j.jtho.2020.04.030
10.1016/j.jinf.2020.03.033
10.1007/s00330-020-07042-x
10.1097/CM9.0000000000000634
10.3390/cancers11111673
10.1111/1759-7714.13185
10.1016/j.cell.2018.02.010
10.1136/thoraxjnl-2019-214104
10.1038/s41586-020-2012-7
10.1016/j.compbiomed.2012.12.004
10.1038/nbt.4233
10.1136/bmj.m606
|
Segmentation-Based Classification Deep Learning Model Embedded with Explainable AI for COVID-19 Detection in Chest X-ray Scans.
|
Background and Motivation: COVID-19 has resulted in a massive loss of life during the last two years. The current imaging-based diagnostic methods for COVID-19 detection in multiclass pneumonia-type chest X-rays are not so successful in clinical practice due to high error rates. Our hypothesis states that if we can have a segmentation-based classification error rate <5%, typically adopted for 510(k) regulatory purposes, the diagnostic system can be adapted in clinical settings. Method: This study proposes 16 types of segmentation-based classification deep learning systems for automatic, rapid, and precise detection of COVID-19. The two deep learning-based segmentation networks, namely UNet and UNet+, along with eight classification models, namely VGG16, VGG19, Xception, InceptionV3, Densenet201, NASNetMobile, Resnet50, and MobileNet, were applied to select the best-suited combination of networks. Using the cross-entropy loss function, the system performance was evaluated by Dice, Jaccard, area-under-the-curve (AUC), and receiver operating characteristics (ROC) and validated using Grad-CAM in an explainable AI framework. Results: The best performing segmentation model was UNet, which exhibited the accuracy, loss, Dice, Jaccard, and AUC of 96.35%, 0.15%, 94.88%, 90.38%, and 0.99 (p-value <0.0001), respectively. The best performing segmentation-based classification model was UNet+Xception, which exhibited the accuracy, precision, recall, F1-score, and AUC of 97.45%, 97.46%, 97.45%, 97.43%, and 0.998 (p-value <0.0001), respectively. Our system outperformed existing methods for segmentation-based classification models. The mean improvement of the UNet+Xception system over all the remaining studies was 8.27%. Conclusion: The segmentation-based classification is a viable option as the hypothesis (error rate <5%) holds true and is thus adaptable in clinical practice.
|
Diagnostics (Basel, Switzerland)
| 2022-09-24T00:00:00
|
[
"Nillmani",
"Neeraj Sharma",
"Luca Saba",
"Narendra N Khanna",
"Mannudeep K Kalra",
"Mostafa M Fouda",
"Jasjit S Suri"
] |
10.3390/diagnostics12092132
10.1371/journal.pone.0249788
10.1016/j.compbiomed.2020.103960
10.1186/s13244-022-01176-w
10.1007/s10554-020-02089-9
10.1001/jama.2020.3786
10.1016/j.acra.2015.12.010
10.1148/radiol.2015150425
10.1118/1.2836950
10.1016/j.ejrad.2019.02.038
10.1038/s42256-020-0186-1
10.1016/j.zemedi.2018.11.002
10.1016/j.compbiomed.2018.05.014
10.3390/cancers11010111
10.1007/s10916-021-01707-w
10.1007/s11548-021-02317-0
10.1016/j.cmpb.2020.105581
10.1016/j.chaos.2020.110495
10.1007/s10489-020-01902-1
10.1016/j.bspc.2020.102365
10.1148/radiol.2020203511
10.3390/diagnostics12061482
10.1016/j.cmpb.2017.07.011
10.3390/biomedicines9070720
10.1016/j.compbiomed.2016.06.010
10.1016/j.compbiomed.2020.103847
10.1016/j.compbiomed.2021.104721
10.23736/S0392-9590.21.04771-4
10.3390/diagnostics12051283
10.3390/diagnostics11112109
10.1016/j.compbiomed.2020.103958
10.1109/ACCESS.2021.3086020
10.2352/J.ImagingSci.Technol.2020.64.2.020508
10.1109/TIM.2022.3174270
10.1007/s11222-009-9153-8
10.1109/TKDE.2019.2912815
10.1016/j.neucom.2018.05.011
10.1109/JBHI.2022.3177854
10.1109/ACCESS.2020.3031384
10.1109/ACCESS.2020.3010287
10.1016/j.compbiomed.2021.104319
10.17632/rscbjbr9sj.2
10.1016/j.cell.2018.02.010
10.1016/j.compbiomed.2022.105571
10.1016/j.compbiomed.2017.10.022
10.1109/ACCESS.2019.2962617
10.1007/s13755-021-00166-4
10.3390/diagnostics12030652
10.1016/j.eswa.2021.116288
10.1007/s11277-018-5777-3
10.1007/s11277-018-5702-9
10.1186/s12880-020-00514-y
10.1109/ACCESS.2020.3017915
10.3390/s21217116
10.1016/j.cmpb.2019.06.005
10.1038/s41598-021-99015-3
10.1016/j.asoc.2020.106912
10.3390/s22031211
10.1109/TMI.2020.2993291
10.1007/s00330-021-08050-1
10.1016/j.bspc.2021.103182
10.1016/j.bea.2022.100041
10.1016/j.compbiomed.2022.105244
10.1016/j.neucom.2021.03.034
10.1007/s10916-016-0504-7
10.1016/j.ejrad.2022.110164
10.1142/S0219467801000402
10.1016/j.mri.2012.04.021
10.1007/s10554-020-02124-9
10.3390/sym14071310
|
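The Dice and Jaccard scores used above to evaluate the segmentation networks have closed-form definitions on binary masks, sketched here on flattened toy masks:

```python
def dice_jaccard(pred, truth):
    """Dice = 2|A∩B| / (|A|+|B|); Jaccard = |A∩B| / |A∪B| on binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    a, b = sum(pred), sum(truth)
    dice = 2.0 * inter / (a + b)
    jaccard = inter / (a + b - inter)
    return dice, jaccard

# Toy flattened binary masks (1 = lesion pixel)
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
d, j = dice_jaccard(pred, truth)
print(d, j)
```

The two are related by J = D / (2 - D), so reporting both, as the study does, conveys the same overlap on two common scales.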
Identification of micro- and nanoplastics released from medical masks using hyperspectral imaging and deep learning.
|
Apart from other severe consequences, the COVID-19 pandemic has caused a surge in personal protective equipment usage, some of which, such as medical masks, have a short effective protection time. Their improper disposal and subsequent natural degradation make them huge sources of micro- and nanoplastic particles. To better understand the consequences of the direct influence of microplastic pollution on biota, there is an urgent need to develop a reliable and high-throughput analytical tool for sub-micrometre plastic identification and visualisation in environmental and biological samples. This study evaluated the application of a combined technique based on dark-field enhanced microscopy and hyperspectral imaging augmented with deep learning data analysis for the visualisation, detection and identification of microplastic particles released from commercially available medical masks after 192 hours of UV-C irradiation. The analysis was performed using a separated blue-coloured spunbond outer layer and white-coloured meltblown interlayer that allowed us to assess the influence of the structure and pigmentation of intact and UV-exposed samples on classification performance. Microscopy revealed strong fragmentation of both layers and the formation of microparticles and fibres of various shapes after UV exposure. Based on the spectral signatures of both layers, it was possible to successfully identify intact materials using a convolutional neural network. However, the further classification of UV-exposed samples demonstrated that the spectral characteristics of samples in the visible to near-infrared range are disrupted, causing a decreased performance of the CNN. Despite this, the application of a deep learning algorithm in hyperspectral analysis outperformed the conventional spectral angle mapper technique in classifying both intact and UV-exposed samples, confirming the potential of the proposed approach in secondary microplastic analysis.
|
The Analyst
| 2022-09-21T00:00:00
|
[
"Ilnur Ishmukhametov",
"Svetlana Batasheva",
"Rawil Fakhrullin"
] |
10.1039/d2an01139e
|
Automated Lung Segmentation from Computed Tomography Images of Normal and COVID-19 Pneumonia Patients.
|
Automated image segmentation is an essential step in quantitative image analysis. This study assesses the performance of a deep learning-based model for lung segmentation from computed tomography (CT) images of normal and COVID-19 patients.
A descriptive-analytical study was conducted from December 2020 to April 2021 on the CT images of patients from various educational hospitals affiliated with Mashhad University of Medical Sciences (Mashhad, Iran). Of the selected images and corresponding lung masks of 1,200 confirmed COVID-19 patients, 1,080 were used to train a residual neural network. The performance of the residual network (ResNet) model was evaluated on two distinct external test datasets, namely the remaining 120 COVID-19 and 120 normal patients. Different evaluation metrics such as Dice similarity coefficient (DSC), mean absolute error (MAE), relative mean Hounsfield unit (HU) difference, and relative volume difference were calculated to assess the accuracy of the predicted lung masks. The Mann-Whitney U test was used to assess the difference between the corresponding values in the normal and COVID-19 patients. P<0.05 was considered statistically significant.
The ResNet model achieved a DSC of 0.980 and 0.971 and a relative mean HU difference of -2.679% and -4.403% for the normal and COVID-19 patients, respectively. Comparable performance in lung segmentation of normal and COVID-19 patients indicated the model's accuracy for identifying lung tissue in the presence of COVID-19-associated infections, although slightly better performance was observed in the normal patients.
The ResNet model provides accurate and reliable automated lung segmentation of COVID-19-infected lung tissue. A preprint version of this article was published on arXiv before formal peer review (https://arxiv.org/abs/2104.02042).
|
Iranian journal of medical sciences
| 2022-09-20T00:00:00
|
[
"Faeze Gholamiankhah",
"Samaneh Mostafapour",
"Nouraddin Abdi Goushbolagh",
"Seyedjafar Shojaerazavi",
"Parvaneh Layegh",
"Seyyed Mohammad Tabatabaei",
"Hossein Arabi"
] |
10.30476/IJMS.2022.90791.2178
10.30476/ijms.2020.85869.1549
10.30476/ijms.2020.87233.1730
10.1109/TMI.2020.3001810
10.1109/TBDATA.2021.3056564
10.1109/TMI.2020.2996645
10.1016/j.media.2020.101794
10.1109/ICBME.2010.5704968
10.1016/j.radphyschem.2021.109666
10.1148/radiol.2020200642
10.1101/2020.03.12.20027185
10.1016/j.cell.2020.04.045
10.1148/radiol.2020202439
10.1186/s41747-020-00173-2
10.1016/j.media.2016.11.003
10.1038/s41598-020-80936-4
10.1016/j.procs.2018.01.104
10.1186/s41824-020-00086-8
10.1002/ima.22527
10.1002/mp.14418
10.1016/j.media.2020.101759
10.1016/j.media.2016.02.002
10.1109/TMI.2021.3066161
10.1109/TMI.2020.2995965
10.1016/j.patcog.2021.108109
10.1016/j.patcog.2020.107747
10.1515/cdbme-2016-0114
10.1016/j.cmpb.2018.01.025
10.1016/j.media.2020.101718
10.1016/j.chest.2020.04.003
10.1148/radiol.2020200432
10.1002/mp.14676
10.1109/RBME.2020.2987975
10.3390/sym12040651
10.3892/etm.2020.9210
10.1016/j.imu.2021.100681
10.3390/diagnostics11081405
10.1016/j.patcog.2021.108071
10.1136/neurintsurg-2015-011697
|
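The abstract above reports a relative mean HU difference and a relative volume difference between predicted and reference lung masks; the exact formulas are not stated, so the sketch below uses one plausible reading (percentage differences relative to the reference mask) on toy data:

```python
def seg_volume_metrics(pred_mask, ref_mask, hu):
    """Relative volume difference and relative mean-HU difference (%)
    between a predicted and a reference lung mask over the same HU image.
    Formulas are an assumed reading, not taken from the paper."""
    vol_p, vol_r = sum(pred_mask), sum(ref_mask)
    rvd = 100.0 * (vol_p - vol_r) / vol_r
    mean_hu = lambda m: sum(h for keep, h in zip(m, hu) if keep) / sum(m)
    # abs() in the denominator keeps the sign meaningful for negative lung HU
    rhd = 100.0 * (mean_hu(pred_mask) - mean_hu(ref_mask)) / abs(mean_hu(ref_mask))
    return rvd, rhd

# Toy flattened HU image plus reference and predicted lung masks
hu = [-800.0, -750.0, -700.0, -100.0]
ref = [1, 1, 1, 0]
pred = [1, 1, 0, 0]
rvd, rhd = seg_volume_metrics(pred, ref, hu)
print(rvd, rhd)
```

On this toy case the prediction misses one lung voxel, giving a negative relative volume difference and a small negative HU shift, the same direction as the abstract's reported values.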
Detection of COVID-19 Infection in CT and X-ray images using transfer learning approach.
|
The infection caused by the SARS-CoV-2 (COVID-19) pandemic is a threat to human lives. An early and accurate diagnosis is necessary for treatment.
The study presents an efficient classification methodology for precise identification of infection caused by COVID-19 using CT and X-ray images.
The depthwise separable convolution-based MobileNet V2 model was exploited for feature extraction. The extracted infection features were supplied to an SVM classifier for training, which produced accurate classification results.
The accuracies for CT and X-ray images are 99.42% and 98.54%, respectively. The MCC score, a more mathematically balanced metric, was used to avoid being misled by accuracy and F1 score alone. The MCC scores obtained for CT and X-ray were 0.9852 and 0.9657, respectively. Youden's index showed a significant improvement of more than 2% for both imaging techniques.
The proposed transfer learning-based approach obtained the best results for all evaluation metrics and produced reliable results for the accurate identification of COVID-19 symptoms. This study can help in reducing the time in diagnosis of the infection.
|
Technology and health care : official journal of the European Society for Engineering and Medicine
| 2022-09-13T00:00:00
|
[
"Alok Tiwari",
"Sumit Tripathi",
"Dinesh Chandra Pandey",
"Neeraj Sharma",
"Shiru Sharma"
] |
10.3233/THC-220114
|
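The MCC and Youden's index quoted above both derive from the binary confusion matrix; a minimal sketch with hypothetical counts:

```python
import math

def mcc_youden(tp, tn, fp, fn):
    """Matthews correlation coefficient and Youden's index (J) from a
    binary confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return mcc, sensitivity + specificity - 1.0

# Hypothetical counts for a COVID-19 / non-COVID test split
print(mcc_youden(tp=95, tn=90, fp=10, fn=5))
```

Unlike accuracy, MCC stays near zero for a classifier that ignores a rare class, which is why the study prefers it on imbalanced medical data.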
A Novel Method for COVID-19 Detection Based on DCNNs and Hierarchical Structure.
|
The worldwide outbreak of the new coronavirus disease (COVID-19) has been declared a pandemic by the World Health Organization (WHO). It has a devastating impact on daily life, public health, and the global economy. Due to its high infectiousness, rapid and accurate early screening of suspected cases is urgent. The chest X-ray medical image, as a diagnostic basis for COVID-19, has attracted attention in medical engineering. However, due to small lesion differences and a lack of training data, the accuracy of detection models is insufficient. In this work, a transfer learning strategy is introduced into a hierarchical structure to enhance the high-level features of deep convolutional neural networks. The proposed framework, consisting of asymmetric pretrained DCNNs with attention networks, integrates various information into a wider architecture to learn more discriminative and complementary features. Furthermore, a novel cross-entropy loss function with a penalty term weakens misclassification. Extensive experiments are implemented on the COVID-19 dataset. Compared with the state of the art, the effectiveness and high performance of the proposed method are demonstrated.
|
Computational and mathematical methods in medicine
| 2022-09-13T00:00:00
|
[
"YuqinLi",
"KeZhang",
"WeiliShi",
"ZhengangJiang"
] |
10.1155/2022/2484435
10.1016/j.bbe.2020.08.008
10.1016/j.irbm.2020.05.003
10.1007/s10044-021-00984-y
10.7717/peerj-cs.313
10.1080/07391102.2020.1788642
10.1016/j.compbiomed.2020.103792
10.1101/2020.05.12.20099937
10.1117/12.2581496
10.1155/2022/6185013
10.1016/j.mehy.2020.109761
10.1016/j.patrec.2021.11.020
10.1016/B978-0-12-824536-1.00003-4
10.1007/s10489-020-01900-3
10.1007/s10489-020-01826-w
10.1007/s11517-020-02299-2
10.1016/j.patrec.2020.09.010
10.20944/preprints202005.0151.v1
10.1016/j.bspc.2022.103595
10.1016/j.compbiomed.2021.105134
10.1016/j.bspc.2019.04.031
10.1109/cvpr.2016.90
10.1109/cvpr.2018.00745
10.1109/ACCESS.2020.3010287
10.1186/s40537-019-0197-0
10.1049/ipr2.12090
10.1007/s40846-020-00529-4
10.3390/ijerph18063056
10.3390/healthcare9050522
10.1016/j.irbm.2020.07.001
|
Rapid quantification of COVID-19 pneumonia burden from computed tomography with convolutional long short-term memory networks.
|
Journal of medical imaging (Bellingham, Wash.)
| 2022-09-13T00:00:00
|
[
"AdityaKillekar",
"KajetanGrodecki",
"AndrewLin",
"SebastienCadet",
"PriscillaMcElhinney",
"AryabodRazipour",
"CatoChan",
"Barry DPressman",
"PeterJulien",
"PeterChen",
"JuditSimon",
"PalMaurovich-Horvat",
"NicolaGaibazzi",
"UditThakur",
"ElisabettaMancini",
"CeciliaAgalbato",
"JiroMunechika",
"HidenariMatsumoto",
"RobertoMenè",
"GianfrancoParati",
"FrancoCernigliaro",
"NiteshNerlekar",
"CamillaTorlasco",
"GianlucaPontone",
"DaminiDey",
"PiotrSlomka"
] |
10.1117/1.JMI.9.5.054001
10.1016/j.ajic.2020.07.011
10.1007/s00330-020-07033-y
10.1038/s41598-020-80061-2
10.1148/rg.2020200159
10.1148/radiol.2020200463
10.1148/radiol.2020200843
10.1148/radiol.2020200370
10.1007/s00330-020-06817-6
10.1148/ryct.2020200047
10.1148/ryai.2020200048
10.1148/ryct.2020200441
10.1148/ryct.2020200389
10.1001/archinternmed.2009.440
10.2967/jnumed.120.246256
10.1097/RLU.0000000000003135
10.1016/j.diii.2020.05.011
10.1007/s00259-020-05014-3
10.4103/ijri.IJRI_479_20
10.1016/j.cell.2020.04.045
10.1148/radiol.2020201491
10.1109/TMI.2020.2996645
10.1016/j.media.2020.101836
10.1007/978-3-319-46723-8_49
10.1109/3DV.2016.79
10.1162/neco.1997.9.8.1735
10.1148/radiol.10091808
10.1109/CVPR.2017.243
10.1109/IVS.2019.8813852
10.1109/CVPR.2017.19
10.1109/CVPR.2009.5206848
10.1007/BF02295996
10.1148/radiol.2020200642
10.1148/radiol.2020200905
10.1016/j.metabol.2020.154436
10.1109/TMI.2020.3000314
10.1186/s12880-020-00529-5
10.1155/2020/4706576
10.1007/s00521-020-05514-1
10.1002/int.22586
10.1117/12.2613272
|
|
Lung image segmentation based on DRD U-Net and combined WGAN with Deep Neural Network.
|
COVID-19 remains a pressing issue, causing a huge number of infections and posing a grave threat to human life. Deep learning-based image diagnostic technology can effectively compensate for the deficiencies of the current main detection method. This paper proposes a multi-classification diagnosis model based on a segmentation-and-classification multi-task approach.
In the segmentation task, the end-to-end DRD U-Net model is used to segment lung lesions, improving feature reuse and target segmentation. In the classification task, a model combining WGAN with a Deep Neural Network classifier is used to effectively solve the problem of multi-class classification of COVID-19 images with small samples, in order to distinguish COVID-19 patients, other pneumonia patients, and normal subjects.
Experiments are carried out on common X-ray and CT image datasets. The results show that in the segmentation task the model is optimal on the key indicators DSC and HD, improving DSC by 0.33% and reducing HD by 3.57 mm compared with the original U-Net. In the classification task, compared with the SMOTE oversampling method, accuracy increased from 65.32% to 73.84%, F-measure from 67.65% to 74.65%, and G-mean from 66.52% to 74.37%. The results also show advantages over other classical multi-task models.
This study provides new possibilities for COVID-19 image diagnosis methods, improves the accuracy of diagnosis, and hopes to provide substantial help for COVID-19 diagnosis.
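The classification metrics quoted in the abstract above (F-measure and G-mean) are standard formulas for imbalanced problems; a minimal sketch with function names of our own choosing:

```python
from math import sqrt

def f_measure(precision, recall):
    """F-measure (F1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def g_mean(sensitivity, specificity):
    """G-mean: geometric mean of sensitivity and specificity,
    commonly used when class sizes are imbalanced."""
    return sqrt(sensitivity * specificity)
```

Because the geometric mean collapses toward zero if either rate is poor, G-mean penalizes classifiers that ignore a minority class, which is why it is paired with SMOTE-style oversampling comparisons.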
|
Computer methods and programs in biomedicine
| 2022-09-12T00:00:00
|
[
"LuoyuLian",
"XinLuo",
"CanyuPan",
"JinlongHuang",
"WenshanHong",
"ZhendongXu"
] |
10.1016/j.cmpb.2022.107097
|
Detection of COVID-19 in Point of Care Lung Ultrasound.
|
The coronavirus disease 2019 (COVID-19) evolved into a global pandemic, responsible for a significant number of infections and deaths. In this scenario, point-of-care ultrasound (POCUS) has emerged as a viable and safe imaging modality. Computer vision (CV) solutions have been proposed to aid clinicians in POCUS image interpretation, namely detection/segmentation of structures and image/patient classification, but relevant challenges still remain. As such, the aim of this study is to develop CV algorithms, using deep learning techniques, to create tools that can aid doctors in the diagnosis of viral and bacterial pneumonia (VP and BP) through POCUS exams. To do so, convolutional neural networks were designed to perform classification tasks. The architectures chosen to build these models were VGG16, ResNet50, DenseNet169, and MobileNetV2. Patient images were divided into three classes: healthy (HE), BP, and VP (which includes COVID-19). Through a comparative study based on several performance metrics, the model based on the DenseNet169 architecture was designated as the best performing model, achieving an average accuracy of 78% over the five iterations of 5-fold cross-validation. Given that the currently available POCUS datasets for COVID-19 are still limited, the training of the models was negatively affected, and the models were not tested on an independent dataset. Furthermore, it was also not possible to perform lesion detection tasks. Nonetheless, in order to provide explainability and understanding of the models, Gradient-weighted Class Activation Mapping (GradCAM) was used as a tool to highlight the most relevant classification regions. Clinical relevance - Reveals the potential of POCUS to support COVID-19 screening. The results are very promising, although the dataset is limited.
|
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
| 2022-09-11T00:00:00
|
[
"JoanaMaximino",
"MiguelCoimbra",
"JoaoPedrosa"
] |
10.1109/EMBC48229.2022.9871235
|
Dynamic Classification of Imageless Bioelectrical Impedance Tomography Features with Attention-Driven Spatial Transformer Neural Network.
|
Point-of-care monitoring devices have proven to be pivotal in the timely screening and intervention of critical care patients. The urgent demands for their deployment during the COVID-19 pandemic have accelerated research and development of rapid, reliable, and low-cost monitoring systems. Electrical Impedance Tomography (EIT) is a highly promising modality for deep tissue imaging that aids bedside diagnosis and treatment. Motivated to bring forth an accurate and intelligent EIT screening system, we bypassed the complexity and challenges typically associated with its image reconstruction and feature identification processes by focusing solely on the raw data output to extract the embedded knowledge. We developed a novel machine learning architecture based on an attention-driven spatial transformer neural network to specifically accommodate the patterns and dependencies within EIT raw data. Through elaborate precision-mapped phantom experiments, we validated the reproduction and recognition of features with systematically controlled changes. We demonstrated over 95% accuracy via state-of-the-art machine learning models, and enhanced performance using our adapted transformer pipeline, with shorter training time and greater computational efficiency. Our approach of using imageless EIT driven by a novel attention-focused feature learning algorithm is highly promising for revolutionizing conventional EIT operations and augmenting its practical usage in medicine and beyond.
|
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
| 2022-09-11T00:00:00
|
[
"MingdeZheng",
"HassanJahanandish",
"HongweiLi"
] |
10.1109/EMBC48229.2022.9870921
|
Transfer Learning for Automated COVID-19 B-Line Classification in Lung Ultrasound.
|
Lung ultrasound (LUS) as a diagnostic tool is gaining support for its role in the diagnosis and management of COVID-19 and a number of other lung pathologies. B-lines are a predominant feature in COVID-19; however, LUS requires a skilled clinician to interpret findings. To facilitate interpretation, our main objective was to develop automated methods to classify B-lines as pathologic vs. normal. We developed transfer learning models based on ResNet networks to classify B-lines as pathologic (at least 3 B-lines per lung field) vs. normal using COVID-19 LUS data. Assessment of B-line severity on a 0-4 multi-class scale was also explored. For binary B-line classification at the frame level, all ResNet models pretrained with ImageNet yielded higher performance than the baseline non-pretrained ResNet-18. The pretrained ResNet-18 had the best Equal Error Rate (EER) of 9.1% vs. the baseline's 11.9%. At the clip level, all pretrained network models resulted in better Cohen's kappa agreement (linear-weighted) and clip score accuracy, with the pretrained ResNet-18 having the best Cohen's kappa of 0.815 [95% CI: 0.804-0.826], and ResNet-101 the best clip scoring accuracy of 93.6%. Similar results were shown for multi-class scoring, where pretrained network models outperformed the baseline model. A class activation map is also presented to guide clinicians in interpreting LUS findings. Future work aims to further improve the multi-class assessment of B-line severity with a more diverse LUS dataset.
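The clip-level agreement metric reported above, linearly weighted Cohen's kappa, penalizes disagreements in proportion to how far apart the two ratings are on the ordinal scale; a minimal pure-Python sketch (function name is ours):

```python
def linear_weighted_kappa(conf):
    """Linearly weighted Cohen's kappa from a k x k confusion matrix
    (rows: first rater, columns: second rater).
    Weight for cell (i, j) is |i - j| / (k - 1)."""
    k = len(conf)
    n = sum(sum(row) for row in conf)
    row_tot = [sum(row) for row in conf]
    col_tot = [sum(conf[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)
            num += w * conf[i][j]                      # observed weighted disagreement
            den += w * row_tot[i] * col_tot[j] / n     # expected under independence
    return 1.0 - num / den if den else 1.0
```

Perfect agreement gives 1.0, chance-level agreement gives 0, and systematic disagreement goes negative; with linear weights, adjacent-score disagreements on the 0-4 severity scale cost less than distant ones.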
|
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
| 2022-09-11T00:00:00
|
[
"Joseph RPare",
"Lars AGjesteby",
"Brian ATelfer",
"Melinda MTonelli",
"Megan MLeo",
"EhabBillatos",
"JonathanScalera",
"Laura JBrattain"
] |
10.1109/EMBC48229.2022.9871894
|
Wasserstein GAN based Chest X-Ray Dataset Augmentation for Deep Learning Models: COVID-19 Detection Use-Case.
|
The novel coronavirus infection (COVID-19) continues to be a concern for the entire globe. Since early detection of COVID-19 is of particular importance, there have been multiple research efforts to supplement the current standard RT-PCR tests. Several deep learning models of varying effectiveness, using chest X-ray images for such diagnosis, have also been proposed. While some of the models are quite promising, there remains a dearth of training data for such deep learning models. The present paper attempts to provide a viable solution to the problem of data deficiency in COVID-19 CXR images. We show that the use of a Wasserstein Generative Adversarial Network (WGAN) can lead to an effective and lightweight solution. It is demonstrated that the WGAN-generated images are on par with the original images, using inference tests on an already proposed COVID-19 detection model.
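What distinguishes the WGAN mentioned above from a standard GAN is its critic objective, which approximates the Wasserstein distance between real and generated distributions, plus a Lipschitz constraint (weight clipping in the original formulation). A minimal sketch of those two pieces, with hypothetical function names, independent of any particular deep learning framework:

```python
def critic_loss(d_real, d_fake):
    """WGAN critic objective: maximize E[D(real)] - E[D(fake)].
    Returned negated, as a loss to minimize, given the critic's
    raw (unbounded) scores on real and generated batches."""
    return sum(d_fake) / len(d_fake) - sum(d_real) / len(d_real)

def clip_weights(weights, c=0.01):
    """Original WGAN Lipschitz enforcement: clamp every critic
    parameter into [-c, c] after each optimizer step."""
    return [max(-c, min(c, w)) for w in weights]
```

In practice each training step scores a real batch and a generated batch, takes a gradient step on `critic_loss`, then applies `clip_weights` to the critic's parameters; the generator is updated less frequently to maximize the critic's score on its samples.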
|
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
| 2022-09-11T00:00:00
|
[
"B ZahidHussain",
"IfrahAndleeb",
"Mohammad SamarAnsari",
"Amit MaheshJoshi",
"NadiaKanwal"
] |
10.1109/EMBC48229.2022.9871519
|
Distance-based detection of out-of-distribution silent failures for Covid-19 lung lesion segmentation.
|
Automatic segmentation of ground glass opacities and consolidations in chest computed tomography (CT) scans can potentially ease the burden on radiologists during times of high resource utilisation. However, deep learning models are not trusted in the clinical routine due to failing silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that leverages the Mahalanobis distance in the feature space and seamlessly integrates into state-of-the-art segmentation pipelines. The simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications, namely segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects far- and near-OOD samples across all explored scenarios.
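The core mechanism described above is a Mahalanobis distance between a test sample's features and statistics estimated from training features, thresholded to flag OOD inputs. A minimal sketch under a diagonal-covariance simplification (the full method uses the complete feature covariance; names and threshold are ours):

```python
from math import sqrt

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance of feature vector x from the training
    feature distribution, assuming a diagonal covariance (var is
    the per-dimension variance of the training features)."""
    return sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

def is_ood(x, mean, var, threshold):
    """Flag x as out-of-distribution when its distance to the
    training-feature mean exceeds a calibrated threshold."""
    return mahalanobis_diag(x, mean, var) > threshold
```

The appeal for "silent failure" detection is that this adds no trainable parameters: the mean and (co)variance are computed once from the frozen segmentation network's features, so a pre-trained pipeline gains an uncertainty signal essentially for free.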
|
Medical image analysis
| 2022-09-10T00:00:00
|
[
"CamilaGonzález",
"KarolGotkowski",
"MoritzFuchs",
"AndreasBucher",
"ArminDadras",
"RicardaFischbach",
"Isabel JasminKaltenborn",
"AnirbanMukhopadhyay"
] |
10.1016/j.media.2022.102596
10.7937/K9/TCIA.2015.zF0vlOPv
10.5281/zenodo.3757476
10.1016/j.cmpb.2021.106236
10.1055/a-1544-2240
|
COVID-19 diagnosis via chest X-ray image classification based on multiscale class residual attention.
|
Aiming at detecting COVID-19 effectively, a multiscale class residual attention (MCRA) network is proposed for chest X-ray (CXR) image classification. First, to overcome the data shortage and improve the robustness of our network, pixel-level image mixing of local regions is introduced to achieve data augmentation and reduce noise. Second, a multi-scale fusion strategy is adopted to extract global contextual information at different scales and enhance semantic representation. Last but not least, class residual attention is employed to generate spatial attention for each class, which avoids inter-class interference and enhances related features to further improve COVID-19 detection. Experimental results show that our network achieves superior diagnostic performance on the COVIDx dataset; its accuracy, PPV, sensitivity, specificity, and F1-score are 97.71%, 96.76%, 96.56%, 98.96%, and 96.64%, respectively. Moreover, the heat maps endow our deep model with some interpretability.
|
Computers in biology and medicine
| 2022-09-10T00:00:00
|
[
"ShangwangLiu",
"TongboCai",
"XiufangTang",
"YangyangZhang",
"ChanggengWang"
] |
10.1016/j.compbiomed.2022.106065
10.1016/j.compbiomed.2022.105350
10.1109/TIP.2021.3058783
10.1007/s00330-020-07268-9
10.1109/TNNLS.2021.3086570
10.1109/TBDATA.2017.2717439
10.1007/s11063-021-10569-9
10.32604/cmes.2020.09463
10.1016/j.micpro.2020.103282
10.2174/1574893615666200207094357
10.1016/j.compbiomed.2022.105383
10.1007/s10489-021-02393-4
10.1109/TMI.2021.3117564
10.1016/j.asoc.2021.108041
10.1016/j.compbiomed.2021.105002
10.1016/j.media.2020.101839
10.1007/s11063-022-10742-8
10.1016/j.compbiomed.2021.105127
10.1007/s10489-021-02572-3
10.1109/TMI.2021.3127074
10.1007/s00521-021-06806-w
10.1016/j.compbiomed.2022.105604
10.1016/j.compbiomed.2022.105244
10.1016/j.compbiomed.2022.105210
10.1007/s10489-021-02691-x
10.1016/j.compbiomed.2022.105335
10.1109/TIM.2021.3128703
10.1109/TGRS.2021.3080580
10.1109/TIP.2021.3124668
10.1109/TIP.2021.3127851
10.1109/TGRS.2021.3056624
10.1016/j.neucom.2021.12.077
10.1109/TIP.2022.3144017
10.1109/TMI.2021.3140120
10.1016/j.media.2021.102345
10.1109/TIP.2021.3139232
10.1109/TIP.2022.3154931
10.1016/j.neucom.2021.11.104
10.1016/j.media.2022.102381
10.1109/TPAMI.2020.3040258
10.1109/TPAMI.2020.3026069
10.1109/CVPR.2016.90
|
Semantic-Powered Explainable Model-Free Few-Shot Learning Scheme of Diagnosing COVID-19 on Chest X-Ray.
|
Chest X-ray (CXR) is commonly performed as an initial investigation in COVID-19, whose fast and accurate diagnosis is critical. Recently, deep learning has shown great potential in detecting people suspected to be infected with COVID-19. However, deep learning yields black-box models, which often break down when forced to make predictions on data for which limited supervised information is available and which lack interpretability; this remains a major barrier to clinical integration. In this work, we propose a semantic-powered explainable model-free few-shot learning scheme to quickly and precisely diagnose COVID-19 with higher reliability and transparency. Specifically, we design a Report Image Explanation Cell (RIEC) to exploit clinical indicators derived from radiology reports as an interpretable driver for introducing prior knowledge during training. Meanwhile, a multi-task collaborative diagnosis strategy (MCDS) is developed to construct N-way K-shot tasks, adopting a cyclic and collaborative training approach to produce better generalization performance on new tasks. Extensive experiments demonstrate that the proposed scheme achieves competitive results (accuracy of 98.91%, precision of 98.95%, recall of 97.94%, and F1-score of 98.57%) in diagnosing COVID-19 and other pneumonia-infected categories, even with only 200 paired CXR images and radiology reports for training. Furthermore, statistical results of comparative experiments show that our scheme provides an interpretable window into COVID-19 diagnosis, improving performance with small sample sizes as well as the reliability and transparency of black-box deep learning models. Our source code will be released at https://github.com/AI-medical-diagnosis-team-of-JNU/SPEMFSL-Diagnosis-COVID-19.
|
IEEE journal of biomedical and health informatics
| 2022-09-09T00:00:00
|
[
"YihangWang",
"ChunjuanJiang",
"YouqingWu",
"TianxuLv",
"HengSun",
"YuanLiu",
"LihuaLi",
"XiangPan"
] |
10.1109/JBHI.2022.3205167
|
Ensemble of Deep Neural Networks based on Condorcet's Jury Theorem for screening Covid-19 and Pneumonia from radiograph images.
|
COVID-19 detection using Artificial Intelligence and Computer-Aided Diagnosis has been the subject of several studies. Deep Neural Networks with hundreds or even millions of parameters (weights) are referred to as "black boxes" because their behavior is difficult to comprehend, even when the model's structure and weights are visible. Different Deep Convolutional Neural Networks perform differently on the same dataset, so we do not necessarily have to rely on just one model; instead, we can obtain our final score by combining multiple models. However, including multiple models in the voter pool does not always improve accuracy. In this regard, the authors proposed a novel approach to determine the voting ensemble score of individual classifiers based on Condorcet's Jury Theorem (CJT). The authors demonstrated that the theorem holds when ensembling N classifiers in neural networks. With the help of CJT, the authors proved that a model's presence in the voter pool will improve the likelihood that the majority vote is accurate if the model is more accurate than the other models. Besides this, the authors also proposed a Domain Extended Transfer Learning (DETL) ensemble model as a soft-voting ensemble method and compared it with the CJT-based ensemble method. Furthermore, as deep learning models typically fail in real-world testing, a novel dataset with no duplicate images has been used. Duplicates in the dataset are quite problematic, since they might affect the training process; a dataset devoid of duplicate images prevents data-leakage problems that might impede the thorough assessment of the trained models. The authors also employed an algorithm for faster training to save computational effort. Our proposed method and experimental results outperformed the state of the art, with the DETL-based ensemble model showing an accuracy of 97.26%, a COVID-19 sensitivity of 98.37%, and a specificity of 100%, and the CJT-based ensemble model showing an accuracy of 98.22%, a COVID-19 sensitivity of 98.37%, and a specificity of 99.79%.
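Condorcet's Jury Theorem, which the ensemble above is built on, states that if each of n independent voters is correct with probability p > 0.5, the probability that the majority vote is correct exceeds p and grows with n. A minimal sketch computing that majority-vote probability (function name is ours; assumes independent, equally accurate binary classifiers and odd n to avoid ties):

```python
from math import comb

def majority_correct_prob(n, p):
    """Probability that a majority of n independent voters, each
    correct with probability p, reaches the correct decision."""
    m = n // 2 + 1  # smallest majority
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(m, n + 1))
```

For example, three independent classifiers each at 80% accuracy yield a majority vote that is correct 89.6% of the time, while at exactly 50% accuracy the ensemble gains nothing, which is the theorem's boundary condition. Real deep networks are not fully independent voters, so this is an idealized upper bound.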
|
Computers in biology and medicine
| 2022-09-06T00:00:00
|
[
"GauravSrivastava",
"NiteshPradhan",
"YashwinSaini"
] |
10.1016/j.compbiomed.2022.105979
10.1109/TII.2021.3057683
10.1109/TII.2021.3057524
|
Deep learning framework for prediction of infection severity of COVID-19.
|
With the onset of the COVID-19 pandemic, quantifying the condition of positively diagnosed patients is of paramount importance. Chest CT scans can be used to measure the severity of a lung infection and to isolate the involvement sites in order to increase awareness of a patient's disease progression. In this work, we developed a deep learning framework for lung infection severity prediction. To this end, we collected a dataset of 232 chest CT scans, involved two public datasets with an additional 59 scans for our model's training, and used two external test sets with 21 scans for evaluation. On an input chest computed tomography (CT) scan, our framework, in parallel, performs lung lobe segmentation utilizing a pre-trained model and infection segmentation using three distinct trained
|
Frontiers in medicine
| 2022-09-06T00:00:00
|
[
"MehdiYousefzadeh",
"MasoudHasanpour",
"MozhdehZolghadri",
"FatemehSalimi",
"AvaYektaeian Vaziri",
"AbolfazlMahmoudi Aqeel Abadi",
"RamezanJafari",
"ParsaEsfahanian",
"Mohammad-RezaNazem-Zadeh"
] |
10.3389/fmed.2022.940960
10.1016/j.idm.2020.02.002
10.1101/2020.02.27.20028027
10.1101/2020.02.07.937862
10.1148/radiol.2020200642
10.1148/radiol.2020200343
10.1371/journal.pone.0250952
10.1038/nature14539
10.1038/s41598-019-51503-3
10.1038/s41591-019-0447-x
10.1038/s41598-019-56589-3
10.1109/TNNLS.2019.2892409
10.1038/s41591-020-0931-3
10.1016/j.patcog.2018.07.031
10.1016/j.dsx.2020.04.012
10.1109/RBME.2020.2987975
10.1016/S2589-7500(20)30054-6
10.1148/ryct.2020200075
10.1101/2020.03.24.20041020
10.1136/bmj.m1328
10.1007/s42979-020-00216-w
10.1007/s10916-020-01582-x
10.1109/TAI.2020.3020521
10.1007/s10278-019-00227-x
10.1038/srep46479
10.1109/JBHI.2017.2725903
10.1002/mp.14676
10.1007/s00330-020-07042-x
10.1016/j.knosys.2020.106647
10.1109/TMI.2020.2995108
10.1109/ISBI.2019.8759468
10.1007/s10278-019-00223-1
10.1186/s41747-020-00173-2
10.1038/s41598-020-76282-0
10.48550/arXiv.2003.11988
10.1007/s10489-020-01829-7
10.1016/j.patcog.2020.107747
10.3389/fpubh.2020.00357
10.48550/arXiv.2006.05018
10.1101/2020.05.20.20108159
10.1101/2020.05.20.20100362
10.1007/978-3-319-24574-4_28
10.48550/arXiv.1511.07122
10.1109/TPAMI.2017.2699184
10.1118/1.3611983
10.1016/j.imu.2022.100935
|
A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
|
Most existing deep learning research in medical image analysis is focused on networks with stronger performance. These networks have achieved success, but their architectures are complex and may contain massive numbers of parameters, ranging from thousands to millions. The high-dimensional, nonconvex nature of the problem makes it easy to train a suboptimal model with the popular stochastic first-order optimizers, which use only gradient information.
Our purpose is to design an adaptive cubic quasi-Newton optimizer that helps escape from suboptimal solutions and improves the performance of deep neural networks on four medical image analysis tasks: detection of COVID-19, COVID-19 lung infection segmentation, liver tumor segmentation, and optic disc/cup segmentation.
In this work, we introduce a novel adaptive cubic quasi-Newton optimizer with a high-order moment (termed ACQN-H) for medical image analysis. The optimizer dynamically captures the curvature of the loss function through a diagonally approximated Hessian and the norm of the difference between the previous two estimates, which helps it escape from saddle points more efficiently. In addition, to reduce the variance introduced by the stochastic nature of the problem, ACQN-H incorporates a high-order moment through an exponential moving average of the iteratively calculated approximated Hessian matrix. Extensive experiments were performed to assess the performance of ACQN-H. These include detection of COVID-19 using COVID-Net on the COVID-chestxray dataset, which contains 16 565 training samples and 1841 test samples; COVID-19 lung infection segmentation using Inf-Net on COVID-CT, which contains 45, 5, and 5 computed tomography (CT) images for training, validation, and testing, respectively; liver tumor segmentation using ResUNet on LiTS2017, which consists of 50 622 abdominal scan images for training and 26 608 images for testing; and optic disc/cup segmentation using MRNet on RIGA, which has 655 color fundus images for training and 95 for testing. The results are compared with commonly used stochastic first-order optimizers such as Adam, SGD, and AdaBound, and with the recently proposed stochastic quasi-Newton optimizer Apollo. For detection of COVID-19, we use classification accuracy as the evaluation metric. For the other three medical image segmentation tasks, seven commonly used evaluation metrics are utilized: Dice, structure measure, enhanced-alignment measure (EM), mean absolute error (MAE), intersection over union (IoU), true positive rate (TPR), and true negative rate.
Experiments on four tasks show that ACQN-H achieves improvements over other stochastic optimizers: (1) compared with AdaBound, ACQN-H achieves 0.49%, 0.11%, and 0.70% higher accuracy on the COVID-chestxray dataset using COVID-Net with VGG16, ResNet50, and DenseNet121 backbones, respectively; (2) ACQN-H has the best scores on the evaluation metrics Dice, TPR, EM, and MAE on the COVID-CT dataset using Inf-Net. In particular, ACQN-H achieves 1.0% better Dice than Apollo; (3) ACQN-H achieves the best results on the LiTS2017 dataset using ResUNet, and outperforms Adam in Dice by 2.3%; (4) ACQN-H improves the performance of MRNet on the RIGA dataset, achieving 0.5% and 1.0% better cup-segmentation scores for Dice and IoU, respectively, compared with SGD. We also present fivefold validation results for the four tasks. The results on detection of COVID-19, liver tumor segmentation, and optic disc/cup segmentation achieve high performance with low variance. For COVID-19 lung infection segmentation, the variance on the test set is much larger than on the validation set, which may be due to the small size of the dataset.
The proposed optimizer ACQN-H has been validated on four medical image analysis tasks including: detection of COVID-19 using COVID-Net on COVID-chestxray, COVID-19 lung infection segmentation using Inf-Net on COVID-CT, liver tumor segmentation using ResUNet on LiTS2017, optic disc/cup segmentation using MRNet on RIGA. Experiments show that ACQN-H can achieve some performance improvement. Moreover, the work is expected to boost the performance of existing deep learning networks in medical image analysis.
|
Medical physics
| 2022-09-05T00:00:00
|
[
"YanLiu",
"MaojunZhang",
"ZhiweiZhong",
"XiangrongZeng"
] |
10.1002/mp.15969
|
Deep Convolutional Neural Network Mechanism Assessment of COVID-19 Severity.
|
As an epidemic, COVID-19's core test instrument still has serious flaws. To improve the present condition, all capabilities and tools available in this field are being used to combat the pandemic. Because of the contagious characteristics of the novel coronavirus (COVID-19) infection, overwhelming numbers of patients queue up for pulmonary X-rays, overloading physicians and radiology departments and significantly impacting the quality of care, diagnosis, and outbreak prevention. Given the scarcity of clinical resources such as intensive care and mechanical ventilation systems in the face of this highly transmissible ailment, it is critical to categorize patients according to their risk categories. This research describes a novel application of the deep convolutional neural network (CNN) technique to assessing COVID-19 illness severity. Utilizing chest X-ray images as input, an unsupervised DCNN model is constructed and proposed to split COVID-19 individuals into four severity classes: low, medium, serious, and critical, with an accuracy level of 96 percent. The efficiency of the DCNN model developed with the proposed methodology is demonstrated by empirical findings on a suitably large number of chest X-ray scans. To the best of our knowledge, it is the first COVID-19 disease severity evaluation research with four distinct phases to use a reasonably large X-ray image dataset and a DCNN with nearly all hyperparameters dynamically adjusted by the variable selection optimization task.
|
BioMed research international
| 2022-09-03T00:00:00
|
[
"JNirmaladevi",
"MVidhyalakshmi",
"E BijolinEdwin",
"NVenkateswaran",
"VinayAvasthi",
"Abdullah AAlarfaj",
"Abdurahman HajinurHirad",
"R KRajendran",
"TegegneAyalewHailu"
] |
10.1155/2022/1289221
10.1109/ISMSIT50672.2020.9255149
10.3390/electronics10141677
10.1101/2020.06.25.20140004
10.1002/pa.2537
10.1016/j.asoc.2020.106912
10.1109/SMART-TECH49988.2020.00041
10.1155/2022/4352730
10.3390/a13100249
10.1016/j.idm.2020.03.002
10.1155/2021/5709257
10.1007/s42600-020-00105-4
10.4066/biomedicalresearch.29-18-886
10.1007/s40747-021-00312-1
10.1155/2020/8856801
10.1007/s10489-020-01829-7
10.1155/2021/6927985
10.1155/2021/5587188
10.1016/j.chaos.2020.110056
10.3390/jpm11050343
10.3390/jcm9061668
10.1016/j.cmpb.2019.06.023
10.3390/info12030109
10.1016/j.chaos.2020.110059
10.1016/j.ipm.2021.102809
10.1371/journal.pone.0241332
10.3389/fimmu.2020.01581
10.1109/ACCESS.2020.2997311
10.1016/j.patter.2020.100074
10.1148/radiol.2021204531
10.1016/S2589-7500(20)30162-X
|
Deep learning-based patient re-identification is able to exploit the biometric nature of medical chest X-ray data.
|
With the rise and ever-increasing potential of deep learning techniques in recent years, publicly available medical datasets became a key factor to enable reproducible development of diagnostic algorithms in the medical domain. Medical data contains sensitive patient-related information and is therefore usually anonymized by removing patient identifiers, e.g., patient names before publication. To the best of our knowledge, we are the first to show that a well-trained deep learning system is able to recover the patient identity from chest X-ray data. We demonstrate this using the publicly available large-scale ChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images from 30,805 unique patients. Our verification system is able to identify whether two frontal chest X-ray images are from the same person with an AUC of 0.9940 and a classification accuracy of 95.55%. We further highlight that the proposed system is able to reveal the same person even ten and more years after the initial scan. When pursuing a retrieval approach, we observe an mAP@R of 0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to 0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks on external datasets such as CheXpert and the COVID-19 Image Data Collection. Based on this high identification rate, a potential attacker may leak patient-related information and additionally cross-reference images to obtain more information. Thus, there is a great risk of sensitive content falling into unauthorized hands or being disseminated against the will of the concerned patients. Especially during the COVID-19 pandemic, numerous chest X-ray datasets have been published to advance research. Therefore, such data may be vulnerable to potential attacks by deep learning-based re-identification algorithms.
|
Scientific reports
| 2022-09-02T00:00:00
|
[
"KaiPackhäuser",
"SebastianGündel",
"NicolasMünster",
"ChristopherSyben",
"VincentChristlein",
"AndreasMaier"
] |
10.1038/s41598-022-19045-3
10.1378/chest.10-1302
10.1038/s41598-019-56847-4
10.2214/AJR.12.10375
10.1038/nature14539
10.1016/j.zemedi.2018.12.003
10.1016/j.acra.2019.10.006
10.1016/S0197-2456(00)00097-0
10.1007/s40256-020-00420-2
10.1148/radiol.2020192224
10.1007/s10278-006-1051-4
10.1142/S0218488502001648
10.1016/j.csl.2019.06.001
10.1145/1866739.1866758
10.1561/0400000042
10.1038/s42256-020-0186-1
10.1109/MSP.2013.2259911
10.1038/s41746-020-00323-1
10.1038/s42256-021-00337-8
10.1126/science.aab3050
10.1109/5.726791
10.1016/j.patrec.2005.10.010
|
Point-of-care SARS-CoV-2 sensing using lens-free imaging and a deep learning-assisted quantitative agglutination assay.
|
The persistence of the global COVID-19 pandemic caused by the SARS-CoV-2 virus has continued to emphasize the need for point-of-care (POC) diagnostic tests for viral diagnosis. The most widely used tests, lateral flow assays used in rapid antigen tests and reverse-transcriptase real-time polymerase chain reaction (RT-PCR), have been instrumental in mitigating the impact of new waves of the pandemic, but fail to provide both sensitive and rapid readout to patients. Here, we present a portable lens-free imaging system coupled with a particle agglutination assay as a novel biosensor for SARS-CoV-2. This sensor images and quantifies individual microbeads undergoing agglutination through a combination of computational imaging and deep learning as a way to detect levels of SARS-CoV-2 in a complex sample. SARS-CoV-2 pseudovirus in solution is incubated with angiotensin-converting enzyme 2 (ACE2)-functionalized microbeads, then loaded into an inexpensive imaging chip. The sample is imaged in a portable in-line lens-free holographic microscope and an image is reconstructed from a pixel super-resolved hologram. Images are analyzed by a deep-learning algorithm that distinguishes microbead agglutination from cell debris and viral particle aggregates, and agglutination is quantified based on the network output. We propose an assay procedure using two images, which results in the accurate determination of viral concentrations greater than the limit of detection (LOD) of 1.27 × 10
|
Lab on a chip
| 2022-09-02T00:00:00
|
[
"Colin JPotter",
"YanmeiHu",
"ZhenXiong",
"JunWang",
"EuanMcLeod"
] |
10.1039/d2lc00289b
|
Automated COVID-19 Classification Using Heap-Based Optimization with the Deep Transfer Learning Model.
|
The outbreak of the COVID-19 pandemic necessitates prompt identification of affected persons to restrict its spread. Radiological imaging such as computed tomography (CT) and chest X-rays (CXR) is considered an effective way to diagnose COVID-19; however, it requires expert knowledge and is time-consuming. At the same time, artificial intelligence (AI) applied to medical images has proven helpful in effectively assessing and providing treatment for COVID-19-infected patients. In particular, deep learning (DL) models act as a vital part of a high-performance classification model for COVID-19 recognition on CXR images. This study develops a heap-based optimization with a deep transfer learning model for the detection and classification (HBODTL-DC) of COVID-19. The proposed HBODTL-DC system focuses mainly on the identification of COVID-19 in CXR images. To do so, the presented HBODTL-DC model initially exploits the Gabor filtering (GF) technique to enhance image quality. In addition, the HBO algorithm with a neural architecture search network (NasNet) large model is employed for the extraction of feature vectors. Finally, an Elman neural network (ENN) model takes the feature vectors as input and categorizes the CXR images into distinct classes. The experimental validation of the HBODTL-DC model takes place on a benchmark CXR image dataset from the Kaggle repository, and the outcomes are checked across numerous dimensions. The experimental outcomes demonstrate the superiority of the HBODTL-DC model over recent approaches, with a maximum accuracy of 0.9992.
|
Computational intelligence and neuroscience
| 2022-09-02T00:00:00
|
[
"BahjatFakieh",
"MahmoudRagab"
] |
10.1155/2022/7508836
10.3390/jpm12020309
10.3390/s21217286
10.3390/ijerph18063056
10.1016/j.patrec.2021.08.018
10.3390/app11199023
10.1016/j.bbe.2020.08.008
10.1155/2020/8828855
10.1016/j.eswa.2020.114054
10.1007/s11760-020-01820-2
10.1016/j.cmpb.2020.105581
10.1109/SSCI47803.2020.9308571
10.3390/s21041480
10.1109/access.2020.3025010
10.3390/diagnostics11050895
10.1016/j.patcog.2014.01.006
10.1016/j.eswa.2020.113702
10.1016/j.asej.2022.101728
10.1177/0361198120967943
10.1016/j.compbiomed.2021.104816
10.1155/2022/6074538
10.1155/2022/6185013
|
COV-RadNet: A Deep Convolutional Neural Network for Automatic Detection of COVID-19 from Chest X-Rays and CT Scans.
|
With the increasing severity of the COVID-19 pandemic, the world is facing a critical fight to cope with its impacts on human health, education, and the economy. The ongoing battle with the novel coronavirus gives high priority to diagnosing patients and providing rapid treatment. The rapid growth of COVID-19 has strained the healthcare systems of the affected countries, creating shortages of ICUs, test kits, ventilation support systems, etc. This paper aims at finding an automatic COVID-19 detection approach that will assist medical practitioners in diagnosing the disease quickly and effectively. In this paper, a deep convolutional neural network, 'COV-RadNet', is proposed to detect COVID-positive, viral pneumonia, lung opacity, and normal, healthy people by analyzing their chest radiographic (X-ray and CT scan) images. A data augmentation technique is applied to balance the 'COVID-19 Radiography Dataset' to make the classifier more robust to the classification task. We have applied a transfer learning approach using four deep learning based models: VGG16, VGG19, ResNet152, and ResNeXt101 to detect COVID-19 from chest X-ray images. We have achieved 97% classification accuracy using our proposed COV-RadNet model for COVID/viral pneumonia/lung opacity/normal, 99.5% accuracy to detect COVID/viral pneumonia/normal, and 99.72% accuracy to detect COVID and non-COVID people. Using chest CT scan images, we have found 99.25% accuracy to classify between COVID and non-COVID classes. Among the pre-trained models, ResNeXt101 has shown the highest accuracy of 98.5% for multiclass classification (COVID, viral pneumonia, lung opacity, and normal).
|
Computer methods and programs in biomedicine update
| 2022-08-31T00:00:00
|
[
"Md KhairulIslam",
"Sultana UmmeHabiba",
"Tahsin AhmedKhan",
"FarzanaTasnim"
] |
10.1016/j.cmpbup.2022.100064
|
Chest X-ray analysis empowered with deep learning: A systematic review.
|
Chest radiographs are widely used in the medical domain, and chest X-rays in particular currently play an important role in the diagnosis of medical conditions such as pneumonia and COVID-19 disease. Recent developments in deep learning techniques have led to promising performance in medical image classification and prediction tasks. With the availability of chest X-ray datasets and emerging trends in data engineering techniques, there has been a growth in related publications. To date, only a few survey papers have addressed chest X-ray classification using deep learning techniques, and they lack an analysis of the trends in recent studies. This systematic review explores and provides a comprehensive analysis of the related studies that have used deep learning techniques to analyze chest X-ray images. We present the state-of-the-art deep learning based pneumonia and COVID-19 detection solutions, trends in recent studies, publicly available datasets, guidance for following a deep learning process, challenges, and potential future research directions in this domain. The findings and conclusions of the reviewed work are organized so that researchers and developers working in this domain can use them to inform decisions on their own research.
|
Applied soft computing
| 2022-08-30T00:00:00
|
[
"DulaniMeedeniya",
"HasharaKumarasinghe",
"ShammiKolonne",
"ChamodiFernando",
"Isabel De la TorreDíez",
"GonçaloMarques"
] |
10.1016/j.asoc.2022.109319
10.1038/s41392-020-00243-2
10.1148/ryct.2020200028
10.1016/j.compmedimag.2019.05.005
10.1109/42.974918
10.1109/ACCESS.2021.3065965
10.3390/app10020559
10.1007/s13246-020-00865-4
10.1016/B978-0-12-819061-6.00013-6
10.1007/s11633-020-1231-6
10.3390/jimaging6120131
10.1016/j.media.2021.102125
10.30534/ijeter/2021/09972021
10.1016/j.scs.2020.102589
10.1109/MCI.2020.3019873
10.1109/ic-ETITE47903.2020.152
10.1016/j.compbiomed.2020.103898
10.1097/01.NAJ.0000444496.24228.2c
10.1016/j.chaos.2020.110337
10.1007/978-981-15-7219-7_22
10.1109/EMBC44109.2020.9175594
10.3390/diagnostics10060417
10.1109/CVPR.2016.90
10.1109/CVPR.2017.243
10.48550/arXiv.1602.07360
10.1109/CVPR.2017.195
10.1007/978-3-319-24574-4_28
10.1016/j.compmedimag.2017.04.001
10.1016/j.asoc.2020.106580
10.1016/j.asoc.2020.106691
10.1017/9781139061773
10.1109/IES50839.2020.9231540
10.1109/ICOSEC49089.2020.9215257
10.1145/3431804
10.1109/KSE.2018.8573404
10.1038/s41598-020-76550-z
10.1016/j.chaos.2020.109944
10.1109/INDIACom51348.2021.00137
10.1109/ICCCNT49239.2020.9225543
10.1109/ISRITI51436.2020.9315478
10.1007/s10044-021-00970-4
10.1155/2021/5513679
10.1002/ima.22566
10.1016/j.compbiomed.2020.103805
10.1016/j.knosys.2020.106062
10.1117/12.2547635
10.1109/TMI.2013.2290491
10.1109/TMI.2013.2284099
10.1007/978-3-030-32254-0_74
10.1109/ICVEE50212.2020.9243290
10.1109/EMBC44109.2020.9176517
10.1109/EBBT.2019.8741582
10.1007/s00264-020-04609-7
10.1007/s10489-020-01829-7
10.1016/j.imu.2020.100405
10.1109/ICECOCS50124.2020.9314567
10.1016/j.jjimei.2021.100020
10.1101/2020.03.26.20044610
10.1109/ACCESS.2021.3086229
10.1117/12.2581314
10.1109/RIVF48685.2020.9140733
10.1109/ACCESS.2020.2974242
10.31661/jbpe.v0i0.2008-1153
10.1109/ICISS49785.2020.9316100
10.1109/DASA53625.2021.9682248
10.48550/arXiv.1711.05225
10.1109/ICECCT.2019.8869364
10.1016/j.chaos.2020.110122
10.1016/j.patrec.2020.09.010
10.1007/s42600-021-00151-6
10.1016/j.eswa.2021.114883
10.1016/j.mehy.2020.109761
10.1109/ICECCE49384.2020.9179404
10.1155/2020/8828855
10.48550/arXiv.1409.1556
10.1109/CVPR.2018.00474
10.5614/itbj.ict.res.appl.2019.13.3.5
10.1117/12.2581882
10.17632/rscbjbr9sj.3
10.1155/2019/4180949
10.5220/0007404301120119
10.1007/s12559-020-09795-5
10.1016/j.cmpb.2020.105581
10.1177/2472630320958376
10.3390/s21041480
10.1109/ACCESS.2020.3010287
10.1016/j.compbiomed.2021.104319
10.1016/j.chaos.2020.110245
10.1109/Confluence47617.2020.9057809
10.1109/CCECE.2019.8861969
10.1007/978-981-15-3369-3_36
10.1142/S0218001421510046
10.1109/EBBT.2019.8742050
10.34740/kaggle/dsv/1019469
10.17632/2fxz4px6d8.4
10.1016/j.compbiomed.2020.103792
10.1016/j.measurement.2019.05.076
10.1109/EIT48999.2020.9208232
10.12928/TELKOMNIKA.v18i3.14751
10.5220/0007346600760083
10.32604/cmc.2021.018514
10.1016/j.sysarc.2019.101635
10.1007/978-3-642-21219-2_1
10.1007/978-3-030-32248-9_45
10.1109/ICARC54489.2022.9753811
10.3991/ijoe.v18i07.30807
10.48550/arXiv.1701.03757
10.1016/j.simpa.2022.100340
10.1148/ryai.2020190043
10.1007/s00521-021-06396-7
|
SEL-COVIDNET: An intelligent application for the diagnosis of COVID-19 from chest X-rays and CT-scans.
|
COVID-19 detection from medical imaging is a difficult challenge that has piqued the interest of experts worldwide. Chest X-rays and computed tomography (CT) scanning are the essential imaging modalities for diagnosing COVID-19. Researchers worldwide are focusing their efforts on developing viable diagnostic methods and rapid treatment procedures for this pandemic. Fast and accurate automated detection approaches have been devised to alleviate the need for medical professionals. Deep learning (DL) technologies have been applied successfully to recognize COVID-19 cases. This paper proposes a set of nine deep learning models for diagnosing COVID-19 based on transfer learning and their implementation in a novel architecture (SEL-COVIDNET). We include a global average pooling layer, flattening, and two fully connected dense layers. The model's effectiveness is evaluated using balanced and unbalanced COVID-19 radiography datasets. After that, our model's performance is analyzed using six evaluation measures: accuracy, sensitivity, specificity, precision, F1-score, and Matthews correlation coefficient (MCC). Experiments demonstrated that the proposed SEL-COVIDNET with tuned DenseNet121, InceptionResNetV2, and MobileNetV3Large models outperformed the comparative SOTA results for multi-class classification (COVID-19 vs. No-finding vs. Pneumonia) in terms of accuracy (98.52%), specificity (98.5%), sensitivity (98.5%), precision (98.7%), F1-score (98.7%), and MCC (97.5%). For the COVID-19 vs. No-finding classification, our method had an accuracy of 99.77%, a specificity of 99.85%, a sensitivity of 99.85%, a precision of 99.55%, an F1-score of 99.7%, and an MCC of 99.4%. The proposed model offers an accurate approach for detecting COVID-19 patients, which aids in the containment of the COVID-19 pandemic.
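The six evaluation measures above all derive from the four binary confusion counts (TP, FP, TN, FN). As an illustrative sketch — not the authors' code, and using made-up counts — they can be computed as:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from binary confusion counts;
    defined as 0 when any marginal count is zero."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def summary(tp, fp, tn, fn):
    """The remaining reported metrics from the same four counts."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    prec = tp / (tp + fp)                  # precision
    f1 = 2 * prec * sens / (prec + sens)   # F1-score
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    return acc, sens, spec, prec, f1

# Hypothetical confusion counts for a binary COVID vs. non-COVID task.
print(round(mcc(90, 10, 80, 20), 3))  # ≈ 0.704
```

MCC is often reported alongside accuracy because it stays informative under class imbalance, which matters for the unbalanced datasets mentioned above.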
|
Informatics in medicine unlocked
| 2022-08-30T00:00:00
|
[
"Ahmad AlSmadi",
"AhedAbugabah",
"Ahmad MohammadAl-Smadi",
"SultanAlmotairi"
] |
10.1016/j.imu.2022.101059
10.1145/3447450.3447458
10.1109/ACCESS.2020.3010287
10.48550/ARXIV.1512.03385
10.48550/ARXIV.1801.04381
10.1109/ICCV.2019.00140
10.1109/CVPRW50498.2020.00183
|
HADCNet: Automatic segmentation of COVID-19 infection based on a hybrid attention dense connected network with dilated convolution.
|
The automatic segmentation of lung infections in CT slices provides a rapid and effective strategy for diagnosing, treating, and assessing COVID-19 cases. However, the segmentation of the infected areas presents several difficulties, including high intraclass variability and interclass similarity among infected areas, as well as blurred edges and low contrast. Therefore, we propose HADCNet, a deep learning framework that segments lung infections based on a dual hybrid attention strategy. HADCNet uses an encoder hybrid attention module to integrate feature information at different scales across the peer hierarchy to refine the feature map. Furthermore, a decoder hybrid attention module uses an improved skip connection to embed the semantic information of higher-level features into lower-level features by integrating multi-scale contextual structures and assigning the spatial information of lower-level features to higher-level features, thereby capturing the contextual dependencies of lesion features across levels and refining the semantic structure. This reduces the semantic gap between feature maps at different levels and improves the model's segmentation performance. We conducted fivefold cross-validations of our model on four publicly available datasets, with final mean Dice scores of 0.792, 0.796, 0.785, and 0.723. These results show that the proposed model outperforms popular state-of-the-art semantic segmentation methods and indicate its potential use in the diagnosis and treatment of COVID-19.
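The Dice scores used for evaluation compare a predicted infection mask against the ground-truth mask. A minimal sketch over flattened binary masks (toy data for illustration, not the paper's implementation):

```python
def dice(pred, target):
    """Dice coefficient between two binary masks given as flat
    sequences of 0/1; two empty masks count as a perfect match."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy example: one of the two predicted foreground pixels is correct.
print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2·|A∩B| / (|A|+|B|) = 2/3
```

In practice the same formula is applied to flattened 2D slice masks; a mean Dice of 0.792 means the predicted and reference lesion masks overlap substantially on average.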
|
Computers in biology and medicine
| 2022-08-28T00:00:00
|
[
"YingChen",
"TaohuiZhou",
"YiChen",
"LongfengFeng",
"ChengZheng",
"LanLiu",
"LipingHu",
"BujianPan"
] |
10.1016/j.compbiomed.2022.105981
|
Novel Coronavirus and Common Pneumonia Detection from CT Scans Using Deep Learning-Based Extracted Features.
|
COVID-19, which was declared a pandemic on 11 March 2020, is still infecting millions to date, as the vaccines that have been developed do not prevent the disease but rather reduce the severity of the symptoms. Until a vaccine is developed that can prevent COVID-19 infection, the testing of individuals will be a continuous process. Because medical personnel must monitor and treat all health conditions, the time-consuming process of monitoring and testing every individual for COVID-19 becomes an impossible task, especially as COVID-19 shares similar symptoms with the common cold and pneumonia. Some over-the-counter tests have been developed and sold, but they are unreliable and add an additional burden because false-positive cases have to visit hospitals and undergo specialized diagnostic tests to confirm the diagnosis. Therefore, the need for systems that can automatically detect and diagnose COVID-19 without human intervention remains an urgent priority and will remain so, because the same technology can be used for future pandemics and other health conditions. In this paper, we propose a modified machine learning (ML) process that integrates deep learning (DL) algorithms for feature extraction with well-known classifiers that can accurately detect and diagnose COVID-19 from chest CT scans. Publicly available datasets from the China Consortium for Chest CT Image Investigation (CC-CCII) were used. The highest average accuracy obtained was 99.9% using the modified ML process when 2000 features were extracted using GoogleNet and ResNet18 with the support vector machine (SVM) classifier. The results obtained using the modified ML process were higher than those of similar methods reported in the extant literature using the same or similarly sized datasets; thus, this study is considered of added value to the current body of knowledge.
Further research in this field is required to develop methods that can be applied in hospitals and can better equip mankind to be prepared for any future pandemics.
|
Viruses
| 2022-08-27T00:00:00
|
[
"GhazanfarLatif",
"HamdyMorsy",
"AsmaaHassan",
"JaafarAlghazo"
] |
10.3390/v14081667
10.1007/s10044-021-00984-y
10.1016/j.idm.2020.02.002
10.3201/eid1212.060401
10.1136/bmj.m641
10.1001/jama.2020.2565
10.1016/j.dsx.2020.04.012
10.1109/JBHI.2020.3037127
10.1016/j.media.2020.101797
10.1007/s10489-020-02029-z
10.1016/j.susoc.2021.08.001
10.1109/ACCESS.2020.3016780
10.1016/j.eswa.2020.114054
10.1109/ACCESS.2020.2994762
10.1016/j.chaos.2020.110495
10.1007/s10489-020-01902-1
10.3390/s21020455
10.3390/s21041480
10.3390/diagnostics11111972
10.1016/j.compbiomed.2022.105233
10.1007/s11042-022-11913-4
10.2174/1573405614666180402150218
10.1016/j.procs.2019.12.110
10.1155/2022/2665283
10.3390/math10050796
10.1002/mp.14609
10.3390/diagnostics11071155
10.1016/j.jcp.2020.110010
10.3233/JIFS-189132
10.1109/ictcs.2019.8923092
10.3390/app10134523
10.1007/s40846-021-00656-6
10.2139/ssrn.3754116
10.4316/AECE.2014.01010
10.1145/3277104.3278311
10.3390/diagnostics12041018
10.1007/978-1-4471-7296-3_21
10.1016/j.cell.2020.04.045
10.1016/j.media.2020.101913
10.1002/mp.15044
10.1016/j.compbiomed.2021.104857
|
Vascular Implications of COVID-19: Role of Radiological Imaging, Artificial Intelligence, and Tissue Characterization: A Special Report.
|
The SARS-CoV-2 virus has caused a pandemic, infecting nearly 80 million people worldwide, with mortality exceeding six million. The average survival span is just 14 days from the time the symptoms become aggressive. The present study delineates the deep-driven vascular damage in the pulmonary, renal, coronary, and carotid vessels due to SARS-CoV-2. This special report addresses an important gap in the literature in understanding (i) the pathophysiology of vascular damage and the role of medical imaging in the visualization of the damage caused by SARS-CoV-2, and (ii) the severity of COVID-19 using artificial intelligence (AI)-based tissue characterization (TC). PRISMA was used to select 296 studies for AI-based TC. Radiological imaging techniques such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound were selected for imaging of the vasculature infected by COVID-19. Four kinds of hypotheses are presented for showing the vascular damage in radiological images due to COVID-19. Three kinds of AI models, namely, machine learning, deep learning, and transfer learning, are used for TC. Further, the study presents recommendations for improving AI-based architectures for vascular studies. We conclude that the process of vascular damage due to COVID-19 has similarities across vessel types, even though it results in multi-organ dysfunction. Although the mortality rate is ~2% of those infected, the long-term effects of COVID-19 need monitoring to avoid deaths. AI seems to be penetrating the health care industry at warp speed, and we expect to see it take an emerging role in patient care that reduces mortality and morbidity rates.
|
Journal of cardiovascular development and disease
| 2022-08-26T00:00:00
|
[
"Narendra NKhanna",
"MaheshMaindarkar",
"AnudeepPuvvula",
"SudipPaul",
"MrinaliniBhagawati",
"PuneetAhluwalia",
"ZoltanRuzsa",
"AdityaSharma",
"SmikshaMunjral",
"RaghuKolluri",
"Padukone RKrishnan",
"Inder MSingh",
"John RLaird",
"MostafaFatemi",
"AzraAlizad",
"Surinder KDhanjil",
"LucaSaba",
"AntonellaBalestrieri",
"GavinoFaa",
"Kosmas IParaskevas",
"Durga PrasannaMisra",
"VikasAgarwal",
"AmanSharma",
"JagjitTeji",
"MustafaAl-Maini",
"AndrewNicolaides",
"VijayRathore",
"SubbaramNaidu",
"KieraLiblik",
"Amer MJohri",
"MonikaTurk",
"David WSobel",
"GyanPareek",
"MartinMiner",
"KlaudijaViskovic",
"GeorgeTsoulfas",
"Athanasios DProtogerou",
"SophieMavrogeni",
"George DKitas",
"Mostafa MFouda",
"Manudeep KKalra",
"Jasjit SSuri"
] |
10.3390/jcdd9080268
10.1007/s10072-021-05756-4
10.3233/JPD-202038
10.3389/fpsyt.2020.590134
10.1001/jama.2020.1585
10.1016/j.compbiomed.2020.103960
10.1186/s13054-020-03062-7
10.1161/CIRCULATIONAHA.112.093245
10.1161/01.ATV.0000051384.43104.FC
10.1016/S0140-6736(20)30937-5
10.1038/s41577-020-0343-0
10.1016/j.atherosclerosis.2020.10.014
10.1016/j.cell.2020.04.004
10.21037/cdt-20-561
10.1016/j.ejrad.2019.02.038
10.1007/s10916-017-0797-1
10.1016/j.compbiomed.2020.103804
10.1007/s10554-020-02099-7
10.1007/s00296-020-04691-5
10.4239/wjd.v12.i3.215
10.1007/s11517-018-1897-x
10.1016/j.ultras.2011.11.003
10.7863/ultra.33.2.245
10.1007/s10278-012-9553-8
10.3109/02652048.2013.879932
10.1016/j.cmpb.2017.07.011
10.1016/j.eswa.2015.03.014
10.1109/TIM.2011.2174897
10.1007/s10916-017-0745-0
10.1007/s11517-006-0119-0
10.1016/j.cmpb.2016.02.004
10.21037/cdt.2016.03.08
10.3390/diagnostics11122367
10.3390/jcm9072146
10.1007/s11517-012-1019-0
10.1016/j.cmpb.2012.09.008
10.1007/s11883-018-0736-8
10.3390/diagnostics11112109
10.1016/j.compbiomed.2021.105131
10.1007/s10916-021-01707-w
10.1109/JBHI.2021.3103839
10.1016/j.mehy.2020.109603
10.1007/s13721-017-0155-8
10.3389/fnagi.2021.633752
10.1016/j.virol.2015.03.043
10.1038/s41579-018-0118-9
10.1038/nrmicro2090
10.1001/jama.2020.6019
10.1038/d41573-020-00016-0
10.1056/NEJMoa2001282
10.1086/375233
10.1001/jama.2020.2648
10.1038/nri2171
10.1161/CIRCULATIONAHA.104.510461
10.1152/ajplung.00498.2016
10.1016/j.cell.2020.02.058
10.1038/s41598-022-07918-6
10.1177/03009858221079665
10.1007/s12250-020-00207-4
10.1128/JVI.01248-09
10.1002/rth2.12175
10.2147/COPD.S329783
10.1128/mBio.00638-15
10.1016/S0140-6736(20)30628-0
10.1002/jmv.23354
10.1007/s12035-021-02457-z
10.1002/path.1440
10.1002/jmv.25709
10.1056/NEJMoa2015432
10.1080/20009666.2021.1921908
10.1136/bmj-2021-069590
10.1186/s12959-020-00255-6
10.7326/L20-1275
10.1148/ryct.2020200277
10.1148/rg.282075705
10.1155/2022/4640788
10.1097/CM9.0000000000000774
10.1007/s11684-020-0754-0
10.1159/000342483
10.1038/s41467-021-22781-1
10.1016/j.kint.2020.04.003
10.1681/ASN.2020040419
10.2139/ssrn.3559601
10.1016/j.tcm.2020.10.005
10.1016/j.avsg.2020.07.013
10.1016/j.acra.2020.07.019
10.1186/s13054-020-02931-5
10.1007/s00330-020-07300-y
10.1016/j.ajem.2020.07.054
10.1093/eurheartj/ehab314
10.1016/j.ijcard.2020.04.028
10.1128/MMBR.05015-11
10.1016/j.tox.2022.153104
10.1080/15384101.2019.1662678
10.1016/j.ebiom.2020.102763
10.1002/jmv.25900
10.1016/j.ebiom.2020.102789
10.1038/s41584-020-0474-5
10.37899/journallamedihealtico.v3i3.647
10.1016/j.juro.2015.03.119
10.1016/j.idcr.2020.e00968
10.1093/ckj/sfaa141
10.1007/s00134-020-06153-9
10.1016/j.biopha.2021.111966
10.4103/iju.IJU_76_21
10.1016/j.xkme.2020.07.010
10.1097/SHK.0000000000001659
10.1016/j.clinimag.2020.11.011
10.1148/radiol.2020201623
10.4269/ajtmh.20-0869
10.1016/S0140-6736(20)30566-3
10.1016/j.jacc.2020.04.031
10.1016/j.immuni.2020.06.017
10.1016/S0304-4157(98)00018-5
10.1038/nri3345
10.2174/138920111798281171
10.12688/f1000research.9692.1
10.1111/j.1365-2362.2009.02153.x
10.1046/j.1523-1755.62.s82.4.x
10.3390/molecules27072048
10.1161/01.RES.84.9.1043
10.1111/ijlh.13829
10.1161/01.ATV.20.8.2019
10.1016/j.jacc.2020.06.080
10.1002/ccd.29056
10.1111/jocs.14538
10.1016/j.clinimag.2021.02.016
10.1007/s00259-021-05375-3
10.1186/s13244-021-00973-z
10.3174/ajnr.A6674
10.1161/CIRCULATIONAHA.120.047525
10.1016/j.wneu.2020.08.154
10.1016/j.neurad.2020.04.003
10.3389/fcvm.2021.671669
10.1016/j.clinimag.2020.07.007
10.1111/jon.12803
10.1155/2020/7397480
10.1016/j.jvs.2021.11.064
10.1016/j.cmpb.2017.12.016
10.1007/s10916-015-0214-6
10.7785/tcrt.2012.500381
10.1038/nature14539
10.1016/j.nicl.2022.103065
10.1016/j.cmpb.2021.106332
10.1016/j.ejrad.2020.109041
10.1007/s00330-020-07044-9
10.3348/kjr.2020.0536
10.1109/TMI.2020.2994459
10.1002/jmri.26887
10.1148/ryai.2020200048
10.1148/radiol.2020200905
10.1007/s10096-020-03901-z
10.21037/atm.2020.03.132
10.1109/ACCESS.2020.3005510
10.1016/j.cell.2020.04.045
10.1109/TUFFC.2020.3005512
10.1109/ACCESS.2020.3003810
10.1016/j.compbiomed.2020.103869
10.1016/j.cmpb.2020.105608
10.1080/07391102.2020.1788642
10.1111/nan.12667
10.3390/cancers14040987
10.1016/j.cmpb.2017.09.004
10.1002/mp.14193
10.1016/j.ekir.2017.11.002
10.1038/s41746-019-0104-2
10.1007/s11517-005-0016-y
10.7863/jum.2009.28.11.1561
10.1080/03772063.2019.1604176
10.1093/ajcn/65.4.1000
10.1007/s11883-016-0635-9
10.5853/jos.2017.02922
10.1016/j.cmpb.2013.07.012
10.1016/j.bspc.2013.08.008
10.1109/TMI.2016.2528162
10.1515/itms-2017-0007
10.1016/j.compbiomed.2020.103958
10.1016/j.cmpb.2012.05.008
10.1016/j.compbiomed.2018.05.014
10.1177/1544316718806421
10.1016/j.compbiomed.2020.103847
10.3390/diagnostics11122257
10.23736/S0392-9590.21.04771-4
10.1016/j.compbiomed.2016.11.011
10.1109/JBHI.2016.2631401
10.3934/mbe.2022229
10.1504/IJCSE.2022.120788
10.1016/j.jacr.2017.12.028
10.1016/j.ejrad.2017.01.031
10.21037/atm-20-7676
10.1007/s10554-020-02124-9
10.1109/TIM.2021.3052577
10.3390/jcm11133721
10.1049/el.2020.2102
10.3390/electronics11111800
10.1016/j.compbiomed.2022.105273
10.1109/RBME.2020.2990959
10.1016/j.preteyeres.2021.100965
10.1038/s41467-020-17971-2
10.1056/NEJMc2011400
10.1016/j.jacc.2020.05.001
10.1186/s12882-020-02150-8
10.7759/cureus.9540
10.1007/s13204-021-01868-7
10.1007/s11548-021-02317-0
10.3389/fphys.2022.832457
10.1002/cyto.a.24274
10.1038/s41598-019-54244-5
10.1148/radiol.2021211483
10.1016/j.jcmg.2021.10.013
10.1681/ASN.2020050589
10.1016/j.ijantimicag.2020.105949
10.1016/j.mayocp.2020.03.024
10.1148/radiol.2020203511
10.1186/s13054-020-03179-9
10.1055/a-1775-8633
10.1155/2021/6761364
10.1681/ASN.2020050597
10.1093/ejcts/ezac289
10.36660/abc.20200302
10.1016/j.compbiomed.2021.104721
10.1016/j.ejvs.2009.03.013
10.3348/kjr.2021.0148
10.1007/s40520-021-01985-x
10.1007/s00296-021-05062-4
10.1056/NEJM199412013312202
10.1016/j.ultrasmedbio.2014.12.024
10.1227/01.NEU.0000239895.00373.E4
10.1681/ASN.V92231
10.1165/rcmb.2007-0441OC
10.1097/MCA.0000000000000934
10.3390/biology11020221
10.2214/AJR.11.6955
10.1097/MCA.0000000000000914
10.1097/CCM.0000000000004899
10.1002/14651858.CD003186.pub3
10.3390/diagnostics12010166
10.1016/j.bspc.2016.03.001
10.1016/j.compbiomed.2021.105204
10.1109/TIM.2022.3174270
10.1016/S0140-6736(96)07492-2
10.1016/j.compbiomed.2022.105571
10.3390/app10238623
10.1016/j.clim.2020.108509
10.1016/j.jpeds.2020.07.039
10.1007/BF01907940
10.1200/GO.20.00064
10.1016/j.irbm.2020.05.003
10.2214/AJR.20.23034
10.1007/s11604-021-01120-w
10.1148/radiol.2020200343
10.1016/j.asoc.2020.106912
10.1109/TIP.2021.3058783
10.1093/eurheartj/ehaa399
|
Artificial intelligence model on chest imaging to diagnose COVID-19 and other pneumonias: A systematic review and meta-analysis.
|
When diagnosing coronavirus disease 2019 (COVID-19), radiologists cannot always make accurate judgments because the image characteristics of COVID-19 and other pneumonias are similar. As machine learning advances, artificial intelligence (AI) models show promise in diagnosing COVID-19 and other pneumonias. We performed a systematic review and meta-analysis to assess the diagnostic accuracy and methodological quality of the models.
We searched PubMed, Cochrane Library, Web of Science, and Embase, as well as preprints from medRxiv and bioRxiv, to locate studies published before December 2021, with no language restrictions. The QUADAS-2 quality assessment, the Radiomics Quality Score (RQS) tool, and the CLAIM checklist were used to assess the quality of each study. We used random-effects models to calculate pooled sensitivity and specificity, and I² statistics to assess heterogeneity.
We screened 32 studies from the 2001 retrieved articles for inclusion in the meta-analysis, comprising 6737 participants in the test or validation groups. The meta-analysis revealed that AI models based on chest imaging can distinguish COVID-19 from other pneumonias: pooled area under the curve (AUC) of 0.96 (95% CI, 0.94-0.98), pooled sensitivity of 0.92 (95% CI, 0.88-0.94), and pooled specificity of 0.91 (95% CI, 0.87-0.93). The average RQS score of the 13 studies using radiomics was 7.8, accounting for 22% of the total score. The 19 studies using deep learning methods had an average CLAIM score of 20, slightly less than half (48.24%) of the ideal score of 42.00.
AI models for chest imaging can diagnose COVID-19 and other pneumonias well. However, they have not been implemented as clinical decision-making tools. Future researchers should pay more attention to the quality of research methodology and further improve the generalizability of the developed predictive models.
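Pooling per-study sensitivities or specificities is typically done by inverse-variance weighting on the logit scale. The sketch below is a simplified fixed-effect version with made-up study data; the random-effects models the review used additionally estimate a between-study variance and add it to each study's variance before weighting:

```python
import math

def pooled_logit(props, ns):
    """Inverse-variance pooling of proportions (e.g., sensitivities)
    on the logit scale. Fixed-effect sketch; a random-effects model
    would add a between-study variance term to each study's variance."""
    num = den = 0.0
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))  # delta-method variance of the logit
        w = 1.0 / var
        num += w * logit
        den += w
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical per-study sensitivities and sample sizes.
print(round(pooled_logit([0.90, 0.93, 0.88], [200, 150, 300]), 3))
```

The pooled estimate always lies between the smallest and largest study values, with larger studies pulling it harder toward their own estimates.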
|
European journal of radiology open
| 2022-08-24T00:00:00
|
[
"Lu-LuJia",
"Jian-XinZhao",
"Ni-NiPan",
"Liu-YanShi",
"Lian-PingZhao",
"Jin-HuiTian",
"GangHuang"
] |
10.1016/j.ejro.2022.100438
|
A dual-stage deep convolutional neural network for automatic diagnosis of COVID-19 and pneumonia from chest CT images.
|
In the coronavirus disease 2019 (COVID-19) pandemic, automated diagnostic tools are urgently required, alongside traditional methods, for the fast and accurate diagnosis of large numbers of patients. In this paper, a deep convolutional neural network (CNN) based scheme is proposed for the automated, accurate diagnosis of COVID-19 from lung computed tomography (CT) scan images. First, for the automated segmentation of lung regions in a chest CT scan, a modified CNN architecture, namely SKICU-Net, is proposed by incorporating additional skip interconnections in the U-Net model that overcome the loss of information in dimension scaling. Next, agglomerative hierarchical clustering is deployed to eliminate the CT slices without significant information. Finally, for effective feature extraction and diagnosis of COVID-19 and pneumonia from the segmented lung slices, a modified DenseNet architecture, namely P-DenseCOVNet, is designed, where parallel convolutional paths are introduced on top of the conventional DenseNet model to achieve better performance by overcoming the loss of positional arguments. Outstanding performances have been achieved with an F
|
Computers in biology and medicine
| 2022-08-23T00:00:00
|
[
"FarhanSadik",
"Ankan GhoshDastider",
"Mohseu RashidSubah",
"TanvirMahmud",
"Shaikh AnowarulFattah"
] |
10.1016/j.compbiomed.2022.105806
10.1101/2020.02.14.20023028
|
Multicenter Study on COVID-19 Lung Computed Tomography Segmentation with varying Ground Glass Opacities using Unseen Deep Learning Artificial Intelligence Paradigms: COVLIAS 1.0 Validation.
|
Variations in COVID-19 lesions such as ground-glass opacities (GGO), consolidations, and crazy paving can compromise the ability of solo-deep learning (SDL) or hybrid-deep learning (HDL) artificial intelligence (AI) models to predict automated COVID-19 lung segmentation in computed tomography (CT) from unseen data, leading to poor clinical manifestations. As the first study of its kind, "COVLIAS 1.0-Unseen" proves two hypotheses: (i) contrast adjustment is vital for AI, and (ii) HDL is superior to SDL. In a multicenter study, 10,000 CT slices were collected from 72 Italian (ITA) patients with low GGO and 80 Croatian (CRO) patients with high GGO. Hounsfield Units (HU) were automatically adjusted to train the AI models and predict from test data, leading to four combinations — two Unseen sets: (i) train-CRO:test-ITA, (ii) train-ITA:test-CRO, and two Seen sets: (iii) train-CRO:test-CRO, (iv) train-ITA:test-ITA. COVLIAS used three SDL models: PSPNet, SegNet, and UNet, and six HDL models: VGG-PSPNet, VGG-SegNet, VGG-UNet, ResNet-PSPNet, ResNet-SegNet, and ResNet-UNet. Two trained, blinded senior radiologists conducted ground-truth annotations. Five types of performance metrics were used to validate COVLIAS 1.0-Unseen, which was further benchmarked against MedSeg, an open-source web-based system. After HU adjustment, for DS and JI, HDL (Unseen AI) > SDL (Unseen AI) by 4% and 5%, respectively. For CC, HDL (Unseen AI) > SDL (Unseen AI) by 6%. The COVLIAS-MedSeg difference was < 5%, meeting regulatory guidelines. Unseen AI was successfully demonstrated using automated HU adjustment, and HDL was found to be superior to SDL.
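The automatic Hounsfield Unit adjustment described above amounts to clipping CT intensities to a window and rescaling them before training. A generic sketch of such windowing — the center/width values here are typical lung-window settings, not necessarily the study's exact parameters:

```python
def hu_window(hu_values, center=-600.0, width=1500.0):
    """Clip CT Hounsfield Units to an intensity window and rescale
    to [0, 1]. Defaults approximate a common lung window; the paper's
    automated adjustment may pick different values per scanner."""
    lo, hi = center - width / 2.0, center + width / 2.0
    out = []
    for v in hu_values:
        v = min(max(v, lo), hi)        # clip to the window
        out.append((v - lo) / (hi - lo))  # rescale to [0, 1]
    return out

# Air (-1350 HU), mid-window tissue (-600 HU), and soft tissue (150 HU).
print(hu_window([-1350, -600, 150]))  # → [0.0, 0.5, 1.0]
```

Normalizing HU consistently across sites is what lets a model trained on one cohort (e.g., low-GGO ITA scans) produce sensible predictions on an unseen cohort with different acquisition settings.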
|
Journal of medical systems
| 2022-08-22T00:00:00
|
[
"Jasjit SSuri",
"SushantAgarwal",
"LucaSaba",
"Gian LucaChabert",
"AlessandroCarriero",
"AlessioPaschè",
"PietroDanna",
"ArminMehmedović",
"GavinoFaa",
"TanayJujaray",
"Inder MSingh",
"Narendra NKhanna",
"John RLaird",
"Petros PSfikakis",
"VikasAgarwal",
"Jagjit STeji",
"RajanikantR Yadav",
"FerencNagy",
"Zsigmond TamásKincses",
"ZoltanRuzsa",
"KlaudijaViskovic",
"Mannudeep KKalra"
] |
10.1007/s10916-022-01850-y
10.23750/abm.v91i1.9397
10.26355/eurrev_202012_24058
10.1016/j.compbiomed.2020.103960
10.1007/s10554-020-02089-9
10.4239/wjd.v12.i3.215
10.26355/eurrev_202108_26464
10.1016/j.clinimag.2021.05.016
10.23750/abm.v92i5.10418
10.52586/5026
10.1148/radiol.2020200432
10.1016/j.ejrad.2020.109041
10.1007/s00330-020-06920-8
10.21037/atm-2020-cass-13
10.1016/j.neurad.2017.09.007
10.21037/atm-20-7676
10.5152/dir.2020.20304
10.1007/s00330-020-06915-5
10.1007/s13244-010-0060-5
10.1080/07853890.2020.1851044
10.2214/AJR.20.23034
10.1148/radiol.2020200343
10.1007/s11548-020-02286-w
10.1007/s11548-021-02317-0
10.1007/s10916-021-01707-w
10.3390/diagnostics12010166
10.1109/JBHI.2021.3103839
10.3390/diagnostics11122257
10.1016/j.wneu.2016.05.069
10.1016/j.ejmp.2017.11.036
10.1016/j.compbiomed.2021.104721
10.1016/j.compbiomed.2021.104803
10.23736/S0392-9590.21.04771-4
10.1016/j.compbiomed.2021.105131
10.1109/TMI.2020.3002417
10.11613/BM.2015.015
10.1080/10408340500526766
10.1177/875647939000600106
10.2741/4725
10.1016/j.ejrad.2019.02.038
10.1007/s10916-018-0940-7
10.1016/j.cmpb.2017.09.004
10.1016/j.cmpb.2019.04.008
10.1007/s10916-010-9645-2
10.1016/j.cmpb.2012.09.008
10.1007/s11517-012-1019-0
10.1177/0954411913483637
10.1016/j.cmpb.2017.12.016
10.1016/j.compbiomed.2017.10.019
10.7785/tcrt.2012.500346
10.1016/j.compbiomed.2015.07.021
10.1016/j.cmpb.2017.07.011
10.1007/s11517-021-02322-0
10.1007/s00330-020-06829-2
10.1109/TNNLS.2021.3054746
10.1109/TMI.2019.2959609
10.1186/s12880-020-00529-5
10.1016/j.acra.2020.09.004
10.1109/TPAMI.2016.2644615
10.1006/jmps.1999.1279
10.1007/978-0-387-39940-9_565
10.1109/TIM.2011.2174897
10.1007/s10916-015-0214-6
10.1016/j.cmpb.2016.02.004
10.1007/s10916-017-0797-1
10.1016/j.eswa.2015.03.014
10.1055/s-0032-1330336
10.1016/j.bspc.2016.03.001
10.1109/4233.992158
10.31083/j.rcm.2020.04.236
|
Psoas muscle metastatic disease mimicking a psoas abscess on imaging.
|
Here, we report a case of malignant psoas syndrome presented to us during the second peak of the COVID-19 pandemic. Our patient had a medical history of hypertension, recently diagnosed with left iliac deep vein thrombosis and previous breast and endometrial cancers. She presented with exquisite pain and a fixed flexion deformity of the left hip. A rim-enhancing lesion was seen within the left psoas muscle and was initially deemed to be a psoas abscess. This failed to respond to medical management and attempts at drainage. Subsequent further imaging revealed the mass was of a malignant nature; histology revealing a probable carcinomatous origin. Following diagnosis, palliative input was obtained and, unfortunately, our patient passed away in a hospice shortly after discharge. We discuss the aetiology, radiological findings and potential treatments of this condition and learning points to prompt clinicians to consider this diagnosis in those with a personal history of cancer.
|
BMJ case reports
| 2022-08-20T00:00:00
|
[
"ChristopherGunn",
"MazyarFani"
] |
10.1136/bcr-2022-250654
10.1186/1470-7330-14-21
10.1007/s00330-009-1577-1
10.1007/s12094-011-0625-x
10.3892/mco.2018.1635
10.1016/j.jpainsymman.2003.12.018
10.1089/jpm.2014.0387
10.4103/IJPC.IJPC_205_19
10.1080/15360288.2017.1301617
10.1016/S0304-3959(99)00039-1
10.1111/papr.12643
10.1159/000360581
10.2214/ajr.174.2.1740401
10.1111/j.1440-1673.1990.tb02831.x
10.11604/pamj.2020.36.231.21137
10.1016/j.gore.2021.100814
10.1016/j.spinee.2015.08.001
10.1136/bcr-2017-223916
10.1102/1470-7330.2013.0011
10.1159/000227588
10.1016/j.ygyno.2006.02.011
10.3233/BD-140384
10.1007/s002560050141
|
Rapid tissue prototyping with micro-organospheres.
|
In vitro tissue models hold great promise for modeling diseases and drug responses. Here, we used emulsion microfluidics to form micro-organospheres (MOSs), which are droplet-encapsulated miniature three-dimensional (3D) tissue models that can be established rapidly from patient tissues or cells. MOSs retain key biological features and responses to chemo-, targeted, and radiation therapies compared with organoids. The small size and large surface-to-volume ratio of MOSs enable various applications including quantitative assessment of nutrient dependence, pathogen-host interaction for anti-viral drug screening, and a rapid potency assay for chimeric antigen receptor (CAR)-T therapy. An automated MOS imaging pipeline combined with machine learning overcomes plating variation, distinguishes tumorspheres from stroma, differentiates cytostatic versus cytotoxic drug effects, and captures resistant clones and heterogeneity in drug response. This pipeline is capable of robust assessments of drug response at individual-tumorsphere resolution and provides a rapid and high-throughput therapeutic profiling platform for precision medicine.
|
Stem cell reports
| 2022-08-20T00:00:00
|
[
"ZhaohuiWang",
"MatteoBoretto",
"RosemaryMillen",
"NaveenNatesh",
"Elena SReckzeh",
"CarolynHsu",
"MarcosNegrete",
"HaipeiYao",
"WilliamQuayle",
"Brook EHeaton",
"Alfred THarding",
"ShreeBose",
"ElseDriehuis",
"JoepBeumer",
"Grecia ORivera",
"Ravian Lvan Ineveld",
"DonaldGex",
"JessicaDeVilla",
"DaisongWang",
"JensPuschhof",
"Maarten HGeurts",
"AthenaYeung",
"CaitHamele",
"AmberSmith",
"EricBankaitis",
"KunXiang",
"ShengliDing",
"DanielNelson",
"DanielDelubac",
"AnneRios",
"RalphAbi-Hachem",
"DavidJang",
"Bradley JGoldstein",
"CarolynGlass",
"Nicholas SHeaton",
"DavidHsu",
"HansClevers",
"XilingShen"
] |
10.1016/j.stemcr.2022.07.016
10.1016/j.copbio.2015.05.003
10.1038/s41556-020-0472-5
10.1038/s41556-018-0143-y
10.1038/s41556-019-0360-z
10.1016/j.medj.2021.08.005
10.1038/s41551-020-0565-2
10.1038/s41596-019-0232-9
10.1038/nm.3388
10.1016/j.cell.2018.07.009
10.1016/j.stem.2022.04.006
10.1158/2159-8290.CD-18-1522
10.3390/jcm8111880
10.1038/nature14415
10.1002/advs.202102418
10.1038/s41591-019-0584-2
10.1038/ng.3127
10.1038/s41586-020-2575-3
10.1016/j.jtbi.2011.03.026
10.1016/j.xcrm.2020.100161
10.1063/1.4995479
10.1158/1078-0432.CCR-20-5026
10.48550/arXiv.1912.08193
10.1016/j.ccell.2021.12.004
10.1002/advs.201903739
10.1038/nprot.2013.046
10.1158/2326-6066.CIR-18-0428
10.1158/1078-0432.CCR-20-0073
10.1016/j.cell.2018.11.021
10.1126/scitranslmed.aay2574
10.1016/j.cell.2017.11.010
10.15252/embj.2018100300
10.1053/j.gastro.2011.07.050
10.1016/j.stemcr.2021.04.009
10.15252/embj.2018100928
10.1016/j.isci.2020.101372
10.1158/2159-8290.CD-18-0349
10.1016/j.celrep.2020.107670
10.1016/j.cell.2015.03.053
10.1126/science.aao2774
10.1039/d0bm01085e
10.1016/j.stem.2019.10.010
|
DAFLNet: Dual Asymmetric Feature Learning Network for COVID-19 Disease Diagnosis in X-Rays.
|
COVID-19 has become the largest public health event worldwide since its outbreak, and early detection is a prerequisite for effective treatment. Chest X-ray images have become an important basis for screening and monitoring the disease, and deep learning has shown great potential for this task. Many studies have proposed deep learning methods for automated diagnosis of COVID-19. Although these methods have achieved excellent performance in terms of detection, most have been evaluated using limited datasets and typically use a single deep learning network to extract features. To this end, the dual asymmetric feature learning network (DAFLNet) is proposed, which is divided into two modules, DAFFM and WDFM. DAFFM mainly comprises the backbone networks EfficientNetV2 and DenseNet for feature fusion. WDFM is mainly for weighted decision-level fusion and features a new pretrained network selection algorithm (PNSA) for determination of the optimal weights. Experiments on a large dataset were conducted using two schemes, DAFLNet-1 and DAFLNet-2, and both schemes outperformed eight state-of-the-art classification techniques in terms of classification performance. DAFLNet-1 achieved an average accuracy of up to 98.56% for the triple classification of COVID-19, pneumonia, and healthy images.
|
Computational and mathematical methods in medicine
| 2022-08-20T00:00:00
|
[
"JingyaoLiu",
"JiashiZhao",
"LiyuanZhang",
"YuMiao",
"WeiHe",
"WeiliShi",
"YanfangLi",
"BaiJi",
"KeZhang",
"ZhengangJiang"
] |
10.1155/2022/3836498
10.1016/j.cmpb.2020.105532
10.1148/radiol.2020200642
10.1016/j.inffus.2020.10.004
10.22266/ijies2016.1231.24
10.3390/diagnostics10060358
10.1016/j.chaos.2020.110190
10.1016/j.crad.2020.03.004
10.1016/j.imu.2020.100360
10.1016/j.compeleceng.2020.106765
10.1007/s12559-020-09776-8
10.1016/j.ins.2020.09.041
10.1016/j.compbiomed.2020.103792
10.1016/j.bspc.2019.04.031
10.1016/j.inffus.2020.11.005
10.1016/j.compbiomed.2020.103805
10.1016/j.bspc.2020.102257
10.1016/j.bspc.2022.103677
10.1007/978-3-030-01234-2_1
10.1186/s13040-021-00244-z
10.1016/j.eswa.2022.116540
10.1007/s11548-020-02283-z
10.2196/19569
10.1007/978-3-319-46493-0_38
|
MFL-Net: An Efficient Lightweight Multi-Scale Feature Learning CNN for COVID-19 Diagnosis From CT Images.
|
Timely and accurate diagnosis of coronavirus disease 2019 (COVID-19) is crucial in curbing its spread. Slow testing results of reverse transcription-polymerase chain reaction (RT-PCR) and a shortage of test kits have led to the consideration of chest computed tomography (CT) as an alternative screening and diagnostic tool. Many deep learning methods, especially convolutional neural networks (CNNs), have been developed to detect COVID-19 cases from chest CT scans. Most of these models demand a vast number of parameters and often suffer from overfitting in the presence of limited training data. Moreover, models based on linearly stacked, single-branch architectures hamper the extraction of multi-scale features, reducing detection performance. In this paper, to handle these issues, we propose an extremely lightweight CNN with multi-scale feature learning blocks called MFL-Net. The MFL-Net comprises a sequence of MFL blocks that effectively combine multiple convolutional layers with 3×3 filters and residual connections, thereby extracting multi-scale features at different levels and preserving them throughout the block. The model has only 0.78M parameters and requires low computational cost and memory space compared to many ImageNet pretrained CNN architectures. Comprehensive experiments are carried out using two publicly available COVID-19 CT imaging datasets. The results demonstrate that the proposed model achieves higher performance than pretrained CNN models and state-of-the-art methods on both datasets with limited training data despite having an extremely lightweight architecture. The proposed method proves to be an effective aid for the healthcare system in the accurate and timely diagnosis of COVID-19.
|
IEEE journal of biomedical and health informatics
| 2022-08-19T00:00:00
|
[
"Amogh ManojJoshi",
"Deepak RanjanNayak"
] |
10.1109/JBHI.2022.3196489
|
Semi-supervised COVID-19 CT image segmentation using deep generative models.
|
A recurring problem in image segmentation is a lack of labelled data. This problem is especially acute in the segmentation of lung computed tomography (CT) of patients with Coronavirus Disease 2019 (COVID-19). The reason for this is simple: the disease has not been prevalent long enough to generate a great number of labels. Semi-supervised learning promises a way to learn from data that is unlabelled and has seen tremendous advancements in recent years. However, due to the complexity of its label space, those advancements cannot be applied to image segmentation. That being said, it is this same complexity that makes it extremely expensive to obtain pixel-level labels, making semi-supervised learning all the more appealing. This study seeks to bridge this gap by proposing a novel model that utilizes the image segmentation abilities of deep convolution networks and the semi-supervised learning abilities of generative models for chest CT images of patients with COVID-19.
We propose a novel generative model called the shared variational autoencoder (SVAE). The SVAE utilizes a five-layer deep hierarchy of latent variables and deep convolutional mappings between them, resulting in a generative model that is well suited for lung CT images. Then, we add a novel component to the final layer of the SVAE which forces the model to reconstruct the input image using a segmentation that must match the ground truth segmentation whenever it is present. We name this final model StitchNet.
We compare StitchNet to other image segmentation models on a high-quality dataset of CT images from COVID-19 patients. We show that our model has comparable performance to the other segmentation models. We also explore the potential limitations and advantages of our proposed algorithm and propose some potential future research directions for this challenging issue.
|
BMC bioinformatics
| 2022-08-17T00:00:00
|
[
"JudahZammit",
"Daryl L XFung",
"QianLiu",
"Carson Kai-SangLeung",
"PingzhaoHu"
] |
10.1186/s12859-022-04878-6
10.1016/j.cell.2020.04.045
10.1109/TPAMI.2016.2644615
10.1109/TPAMI.2019.2960224
10.1016/j.patcog.2020.107269
10.1186/s12967-021-02992-2
|
CovMnet-Deep Learning Model for classifying Coronavirus (COVID-19).
|
Chest X-ray imaging is widely used to evaluate lung disorders and to diagnose COVID-19, the current pandemic disease. As the spread of the disease is enormous, many medical camps are being conducted to screen patients, and chest X-ray is a simple imaging modality to detect the presence of lung disorders. Manual lung disorder detection using chest X-rays by a radiologist is a tedious process and may lead to inter- and intra-rater errors. Various deep convolutional neural network techniques were tested for detecting COVID-19 abnormalities in lungs using chest X-ray images. This paper proposes a deep learning model to classify COVID-19 and normal chest X-ray images. Experiments are carried out for deep feature extraction, fine-tuning of convolutional neural network (CNN) hyperparameters, and end-to-end training of four variants of the CNN model. The proposed CovMnet provides a better classification accuracy of 97.4% for COVID-19/normal than those reported in previous studies. The proposed CovMnet model has the potential to aid radiologists in monitoring COVID-19 disease and proves to be an efficient non-invasive COVID-19 diagnostic tool for lung disorders.
|
Health and technology
| 2022-08-16T00:00:00
|
[
"MalathyJawahar",
"Jani AnbarasiL",
"VinayakumarRavi",
"JPrassanna",
"S GracelineJasmine",
"RManikandan",
"RamesSekaran",
"SuthendranKannan"
] |
10.1007/s12553-022-00688-1
10.1016/j.chemolab.2020.104054
10.1007/s10044-021-00984-y
10.1007/s13246-020-00865-4
10.1016/j.ins.2020.09.041
10.1016/j.chaos.2020.109949
10.1016/j.chaos.2020.110242
10.4018/IJSSCI.2020070102
10.4249/scholarpedia.1717
10.1113/jphysiol.1970.sp009022
10.1007/BF00344251
10.1007/s00521-018-3761-1
10.1007/s10096-020-03901-z
|
Multiclass Classification for Detection of COVID-19 Infection in Chest X-Rays Using CNN.
|
Coronavirus took the world by surprise and caused a lot of trouble in all the important fields in life. The complexity of dealing with coronavirus lies in the fact that it is highly infectious and is a novel virus which is hard to detect with exact precision. The typical detection method for COVID-19 infection is RT-PCR, but it is a rather expensive method which is also invasive and has a high margin of error. Radiographs are a good alternative for COVID-19 detection given the experience and learning capabilities of the radiologist. To make an accurate detection from chest X-rays, deep learning technologies can be involved to analyze the radiographs, learn distinctive patterns of coronavirus' presence, find these patterns in the tested radiograph, and determine whether the sample is actually COVID-19 positive or negative. In this study, we propose a model based on deep learning technology using Convolutional Neural Networks and training it on a dataset containing a total of over 35,000 chest X-ray images: nearly 16,000 COVID-19-positive images, 15,000 normal images, and 5,000 pneumonia-positive images. The model's performance was assessed in terms of accuracy, precision, recall, and F1 score.
|
Computational intelligence and neuroscience
| 2022-08-16T00:00:00
|
[
"Rawan SaqerAlharbi",
"Hadeel AysanAlsaadi",
"SManimurugan",
"TAnitha",
"MiniluDejene"
] |
10.1155/2022/3289809
10.1016/j.chaos.2020.110495
10.1016/j.bspc.2022.103561
10.1007/s13755-020-00135-3
10.3390/ijerph18063056
10.1007/978-3-642-15825-4_10
10.3390/s2203121
10.1007/s10489-020-01978-9
10.1016/j.neucom.2021.03.034
10.1016/j.bspc.2021.102920
|
Innovations in thoracic imaging: CT, radiomics, AI and x-ray velocimetry.
|
In recent years, pulmonary imaging has seen enormous progress, with the introduction, validation and implementation of new hardware and software. There is a general trend from mere visual evaluation of radiological images to quantification of abnormalities and biomarkers, and assessment of 'non visual' markers that contribute to establishing diagnosis or prognosis. Important catalysts to these developments in thoracic imaging include new indications (like computed tomography [CT] lung cancer screening) and the COVID-19 pandemic. This review focuses on developments in CT, radiomics, artificial intelligence (AI) and x-ray velocimetry for imaging of the lungs. Recent developments in CT include the potential for ultra-low-dose CT imaging for lung nodules, and the advent of a new generation of CT systems based on photon-counting detector technology. Radiomics has demonstrated potential towards predictive and prognostic tasks particularly in lung cancer, previously not achievable by visual inspection by radiologists, exploiting high dimensional patterns (mostly texture related) on medical imaging data. Deep learning technology has revolutionized the field of AI and as a result, performance of AI algorithms is approaching human performance for an increasing number of specific tasks. X-ray velocimetry integrates x-ray (fluoroscopic) imaging with unique image processing to produce quantitative four dimensional measurement of lung tissue motion, and accurate calculations of lung ventilation.
|
Respirology (Carlton, Vic.)
| 2022-08-16T00:00:00
|
[
"RozemarijnVliegenthart",
"AndreasFouras",
"ColinJacobs",
"NickolasPapanikolaou"
] |
10.1111/resp.14344
10.1097/RLI.0000000000000822
10.1148/radiol.210551
10.1007/s00247-021-05146-0
10.1109/CVPR.2017.369
|
Reinforcement Learning Based Diagnosis and Prediction for COVID-19 by Optimizing a Mixed Cost Function From CT Images.
|
Novel coronavirus disease (COVID-19) is a pandemic disease that has caused 4 million deaths and more than 200 million infections worldwide (as of August 4, 2021). Rapid and accurate diagnosis of COVID-19 infection is critical to controlling the spread of the epidemic. In order to quickly and efficiently detect COVID-19 and reduce its threat to human survival, we first propose a detection framework based on reinforcement learning for COVID-19 diagnosis, which constructs a mixed loss function that can integrate the advantages of multiple loss functions. This paper uses the accuracy of the validation set as the reward value, and obtains the initial model for the next epoch by searching for the model corresponding to the maximum reward value in each epoch. We also propose a prediction framework that integrates multiple detection frameworks using parameter sharing to predict the progression of patients' disease without additional training. This paper also constructed a higher-quality version of the CT image dataset containing 247 cases screened by professional physicians, and obtained better results on this dataset. Meanwhile, we used two other COVID-19 datasets as external verification, and still achieved a high accuracy rate without additional training. Finally, the experimental results show that our classification accuracy can reach 98.31%, and the precision, sensitivity, specificity, and AUC (Area Under Curve) are 98.82%, 97.99%, 98.67%, and 0.989, respectively. The accuracy of external verification can reach 93.34% and 91.05%. What's more, the accuracy of our prediction framework is 91.54%. A large number of experiments demonstrate that our proposed method is effective and robust for COVID-19 detection and prediction.
|
IEEE journal of biomedical and health informatics
| 2022-08-12T00:00:00
|
[
"SiyingChen",
"MinghuiLiu",
"PanDeng",
"JialiDeng",
"YiYuan",
"XuanCheng",
"TianshuXie",
"LiboXie",
"WeiZhang",
"HaigangGong",
"XiaominWang",
"LifengXu",
"HongPu",
"MingLiu"
] |
10.1109/JBHI.2022.3197666
|
Detection of COVID-19 from chest X-ray images: Boosting the performance with convolutional neural network and transfer learning.
|
Coronavirus disease (COVID-19) is a pandemic that has caused thousands of casualties and impacts all over the world. Most countries are facing a shortage of COVID-19 test kits in hospitals due to the daily increase in the number of cases. Early detection of COVID-19 can protect people from severe infection. Unfortunately, COVID-19 can be misdiagnosed as pneumonia or other illnesses and can lead to patient death. Therefore, in order to avoid the spread of COVID-19 among the population, it is necessary to implement an automated early diagnostic system as a rapid alternative diagnostic system. Several researchers have done very well in detecting COVID-19; however, most of their models have lower accuracy and overfitting issues that make early screening of COVID-19 difficult. Transfer learning is the most successful technique to solve this problem with higher accuracy. In this paper, we studied the feasibility of applying transfer learning and added our own classifier to automatically classify COVID-19, because transfer learning is very suitable for medical imaging due to the limited availability of data. In this work, we proposed a CNN model based on a deep transfer learning technique using six different pre-trained architectures, including VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0. A total of 3886 chest X-rays (1200 cases of COVID-19, 1341 healthy, and 1345 cases of viral pneumonia) were used to study the effectiveness of the proposed CNN model. A comparative analysis of the proposed CNN models using three classes of chest X-ray datasets was carried out in order to find the most suitable model. Experimental results show that the proposed CNN model based on VGG16 was able to accurately diagnose COVID-19 patients with 97.84% accuracy, 97.90% precision, 97.89% sensitivity, and an F1 score of 97.89%.
|
Expert systems
| 2022-08-11T00:00:00
|
[
"SohaibAsif",
"YiWenhui",
"KamranAmjad",
"HouJin",
"YiTao",
"SiJinhai"
] |
10.1111/exsy.13099
10.1080/07391102.2020.1767212
10.20944/preprints202003.0300.v1
|
A modified DeepLabV3+ based semantic segmentation of chest computed tomography images for COVID-19 lung infections.
|
Coronavirus disease (COVID-19) affects the lives of billions of people worldwide and has destructive impacts on daily life routines, the global economy, and public health. Early diagnosis and quantification of COVID-19 infection have a vital role in improving treatment outcomes and interrupting transmission. For this purpose, advances in medical imaging techniques like computed tomography (CT) scans offer great potential as an alternative to RT-PCR assay. CT scans enable a better understanding of infection morphology and tracking of lesion boundaries. Since manual analysis of CT can be extremely tedious and time-consuming, robust automated image segmentation is necessary for clinical diagnosis and decision support. This paper proposes an efficient segmentation framework based on the modified DeepLabV3+ using lower atrous rates in the Atrous Spatial Pyramid Pooling (ASPP) module. The lower atrous rates make the receptive fields smaller to capture intricate morphological details. The encoder part of the framework utilizes a pre-trained residual network based on dilated convolutions for optimum resolution of feature maps. In order to evaluate the robustness of the modified model, a comprehensive comparison with other state-of-the-art segmentation methods was also performed. The experiments were carried out using a fivefold cross-validation technique on a publicly available database containing 100 single-slice CT scans from >40 patients with COVID-19. The modified DeepLabV3+ achieved good segmentation performance using around 43.9 M parameters. The lower atrous rates in the ASPP module improved segmentation performance. After fivefold cross-validation, the framework achieved an overall Dice similarity coefficient score of 0.881. The results demonstrate that several minor modifications to the DeepLabV3+ pipeline can provide robust solutions for improving segmentation performance and hardware implementation.
|
International journal of imaging systems and technology
| 2022-08-10T00:00:00
|
[
"HasanPolat"
] |
10.1002/ima.22772
10.1002/ima.22566
10.1002/ima.22525
10.1016/j.measurement.2020.108288
10.1016/j.mehy.2020.109761
10.1148/radiol.2020200642
10.1016/j.aej.2020.10.046
10.1016/j.media.2017.07.005
10.1016/j.tmaid.2020.101623
10.1016/j.jrid.2020.04.001
10.1111/exsy.12742
10.1049/iet-cvi.2018.5129
10.1049/iet-its.2018.5144
10.1016/j.specom.2017.02.009
10.1016/j.eswa.2021.115465
10.1016/j.compbiomed.2020.104037
10.1016/j.clinimag.2021.01.019
10.1109/ICDMW.2018.00176
10.30897/ijegeo.737993
10.1016/j.media.2020.101794
10.1186/s12880-020-00529-5
10.1016/j.imu.2021.100681
10.1109/CVPR.2016.90
10.1007/s11042-020-09634-7
10.1109/ACCESS.2016.2624938
10.1016/j.compbiomed.2021.105134
10.1016/j.compbiomed.2022.105383
10.1007/s10278-021-00434-5
10.1007/978-3-319-24574-4_28
10.1109/TPAMI.2016.2572683
10.1155/2021/9999368
10.1145/3453892.3461322
10.1016/j.asoc.2020.106897
10.1016/j.cmpb.2021.106004
10.1016/j.patcog.2022.108538
10.31590/ejosat.819409
10.3390/s20113183
10.1016/j.patrec.2020.07.029
10.1109/WACV.2018.00163
10.3390/su13031224
10.3390/diagnostics11091712
10.1007/978-3-030-01234-2_49
10.1007/978-3-319-10578-9_23
10.1016/j.eswa.2020.114417
10.1002/mp.14676
10.48550/arXiv.1412.6980
10.1007/s10462-020-09854-1
10.5281/ZENODO.3757476
10.1109/ACCESS.2021.3067047
|
Deep Learning-Based Time-to-Death Prediction Model for COVID-19 Patients Using Clinical Data and Chest Radiographs.
|
Accurate estimation of mortality and time to death at admission for COVID-19 patients is important and several deep learning models have been created for this task. However, there are currently no prognostic models which use end-to-end deep learning to predict time to event for admitted COVID-19 patients using chest radiographs and clinical data. We retrospectively implemented a new artificial intelligence model combining DeepSurv (a multiple-perceptron implementation of the Cox proportional hazards model) and a convolutional neural network (CNN) using 1356 COVID-19 inpatients. For comparison, we also prepared DeepSurv only with clinical data, DeepSurv only with images (CNNSurv), and Cox proportional hazards models. Clinical data and chest radiographs at admission were used to estimate patient outcome (death or discharge) and duration to the outcome. Harrell's concordance index (c-index) of the DeepSurv with CNN model was 0.82 (0.75-0.88) and this was significantly higher than the DeepSurv only with clinical data model (c-index = 0.77 (0.69-0.84), p = 0.011), CNNSurv (c-index = 0.70 (0.63-0.79), p = 0.001), and the Cox proportional hazards model (c-index = 0.71 (0.63-0.79), p = 0.001). These results suggest that the time-to-event prognosis model became more accurate when chest radiographs and clinical data were used together.
|
Journal of digital imaging
| 2022-08-09T00:00:00
|
[
"ToshimasaMatsumoto",
"Shannon LeighWalston",
"MichaelWalston",
"DaijiroKabata",
"YukioMiki",
"MasatsuguShiba",
"DaijuUeda"
] |
10.1007/s10278-022-00691-y
10.7861/clinmed.2020-0214
10.1186/s13613-020-00650-2
10.1093/cid/ciaa414
10.1016/S2213-8587(21)00089-9
10.1001/jama.2018.11100
10.1038/nature14539
10.1186/s12874-018-0482-1
10.1016/j.amjmed.2004.03.020
10.1007/s11547-020-01232-9
10.1007/s10140-020-01808-y
10.1148/radiol.2020201754
10.1148/radiol.2020200823
10.1007/s00330-020-06827-4
10.1007/s10278-013-9622-7
10.1136/bmj.h5527
10.1136/bmj.n2400
10.1001/jamainternmed.2021.6203
10.1056/NEJMoa2103417
10.1016/j.jiph.2021.09.023
10.1371/journal.pone.0241955
10.1038/s41586-020-2521-4
10.1023/A:1010933404324
10.1002/(SICI)1097-0258(19960229)15:4<361::AID-SIM168>3.0.CO;2-4
10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2
10.1111/j.0006-341X.2005.030814.x
10.1136/bmj.m1328
10.2196/25535
10.2196/24973
10.1016/j.media.2021.102096
10.1148/ryai.2020200098
10.1038/s41598-022-07890-1
10.1038/s41598-021-93543-8
10.1038/s41598-019-43372-7
10.1016/S2589-7500(21)00039-X
10.1016/j.lfs.2020.117788
10.1186/s13054-019-2663-7
10.1001/jamanetworkopen.2020.25881
10.1001/jamanetworkopen.2020.5842
10.1002/acm2.12995
10.1007/BF00344251
10.1148/radiol.2017171183
10.1056/NEJMc2104626
|
Deep Learning-Aided Automated Pneumonia Detection and Classification Using CXR Scans.
|
The COVID-19 pandemic has caused a worldwide catastrophe and widespread devastation that affected almost all countries. The pandemic has mounted pressure on the existing healthcare system and caused panic and desperation. The gold testing standard for COVID-19 detection, reverse transcription-polymerase chain reaction (RT-PCR), has shown its limitations, with about 70% accuracy, contributing to incorrect diagnoses that exaggerated complexities and increased fatalities. New variants further pose unseen challenges in terms of their diagnosis and subsequent treatment. The COVID-19 virus heavily impacts the lungs and fills the air sacs with fluid, causing pneumonia. Thus, chest X-ray inspection is a viable option: if the inspection detects COVID-19-induced pneumonia, it confirms exposure to COVID-19. Artificial intelligence and machine learning techniques are capable of examining chest X-rays in order to detect patterns that can confirm the presence of COVID-19-induced pneumonia. This research used CNN and deep learning techniques to detect COVID-19-induced pneumonia from chest X-rays. Transfer learning with fine-tuning ensures that the proposed work successfully classifies COVID-19-induced pneumonia, regular pneumonia, and normal conditions. Xception, Visual Geometry Group 16, and Visual Geometry Group 19 are used to realize transfer learning. The experimental results were promising in terms of precision, recall, F1 score, specificity, false omission rate, false negative rate, false positive rate, and false discovery rate, with a COVID-19-induced pneumonia detection accuracy of 98%. Experimental results also revealed that the proposed work has not only correctly identified COVID-19 exposure but also made a distinction between COVID-19-induced pneumonia and regular pneumonia, as the latter is a very common disease, while COVID-19 is more lethal.
These results mitigated the concern and overlap in the diagnosis of COVID-19-induced pneumonia and regular pneumonia. With further integrations, it can be employed as a potential standard model in differentiating the various lung-related infections, including COVID-19.
|
Computational intelligence and neuroscience
| 2022-08-09T00:00:00
|
[
"Deepak KumarJain",
"TarishiSingh",
"PraneetSaurabh",
"DhananjayBisen",
"NeerajSahu",
"JayantMishra",
"HabiburRahman"
] |
10.1155/2022/7474304
10.1109/ICDABI51230.2020.9325626
10.1001/jama.2020.1585
10.1016/s0140-6736(20)30211-7
10.1056/NEJMoa2001316
10.1016/S0140-6736(20)30183-5
10.1093/clinchem/hvaa029
10.1148/radiol.2020200230
10.2214/AJR.20.23034
10.1007/s10489-020-01826-w
10.1109/ISMSIT.2019.8932878
10.1109/NSSMIC.2018.8824292
10.1109/ICNSC.2018.8361312
10.1007/s11517-019-01965-4
10.1109/TMI.2019.2894349
10.1148/radiol.2019181960
10.3390/app10020559
10.1016/j.compbiomed.2020.103869
10.1145/3195588.3195597
10.1148/radiol.2020200905
10.1371/journal.pmed.1002686
10.1109/ACCESS.2020.3010287
10.3390/app9194130
10.1109/TMI.2020.2994459
10.17632/9xkhgts2s6.1
10.1007/s40009-020-00979-z
10.1007/s11042-022-12775-6
|
A Novel Multi-Stage Residual Feature Fusion Network for Detection of COVID-19 in Chest X-Ray Images.
|
To suppress the spread of COVID-19, accurate diagnosis at an early stage is crucial; chest screening with radiography imaging plays an important role in addition to the real-time reverse transcriptase polymerase chain reaction (RT-PCR) swab test. Due to the limited data, existing models suffer from inadequate feature extraction and poor network convergence and optimization. Accordingly, a multi-stage residual network, MSRCovXNet, is proposed for effective detection of COVID-19 from chest X-ray (CXR) images. As a shallow yet effective classifier with ResNet-18 as the feature extractor, MSRCovXNet is optimized by fusing two proposed feature enhancement modules (FEM) operating on the low-level and high-level feature maps (LLFMs and HLFMs), which contain more local information and richer semantic information, respectively. For effective fusion of these two features, a single-stage FEM (SSFEM) and a multi-stage FEM (MSFEM) are proposed to enhance the semantic feature representation of the LLFMs and the local feature representation of the HLFMs, respectively. Without ensembling other deep learning models, our MSRCovXNet achieves a precision of 98.9% and a recall of 94% in the detection of COVID-19, which outperforms several state-of-the-art models. When evaluated on the COVIDGR dataset, an average accuracy of 82.2% is achieved, leading other methods by at least 1.2%.
|
IEEE transactions on molecular, biological, and multi-scale communications
| 2022-08-09T00:00:00
|
[
"ZhenyuFang",
"JinchangRen",
"CalumMacLellan",
"HuihuiLi",
"HuiminZhao",
"AmirHussain",
"GiancarloFortino"
] |
10.1109/TMBMC.2021.3099367
|
Deep Learning Based COVID-19 Detection Using Medical Images: Is Insufficient Data Handled Well?
|
Deep learning is a prominent method for automatic detection of COVID-19 disease using a medical dataset. This paper aims to give a perspective on the data insufficiency issue that exists in COVID-19 detection associated with deep learning. The extensive study of the available datasets comprising CT and X-ray images is presented in this paper, which can be very much useful in the context of a deep learning framework for COVID-19 detection. Moreover, various data handling techniques that are very essential in deep learning models are discussed in detail. Advanced data handling techniques and approaches to modify deep learning models are suggested to handle the data insufficiency problem in deep learning based on COVID-19 detection.
|
Current medical imaging
| 2022-08-06T00:00:00
|
[
"CarenBabu",
"RahulManohar O",
"D AbrahamChandy"
] |
10.2174/1573405618666220803123626
|
Deep Learning-Based Networks for Detecting Anomalies in Chest X-Rays.
|
X-ray images aid medical professionals in the diagnosis and detection of pathologies. They are critical, for example, in the diagnosis of pneumonia, the detection of masses, and, more recently, the detection of COVID-19-related conditions. The chest X-ray is one of the first imaging tests performed when pathology is suspected because it is one of the most accessible radiological examinations. Deep learning-based neural networks, particularly convolutional neural networks, have exploded in popularity in recent years and have become indispensable tools for image classification. Transfer learning approaches, in particular, have enabled the use of previously trained networks' knowledge, eliminating the need for large data sets and lowering the high computational costs associated with this type of network. This research focuses on using deep learning-based neural networks to detect anomalies in chest X-rays. Different convolutional network-based approaches are investigated using the ChestX-ray14 database, which contains over 100,000 X-ray images with labels relating to 14 different pathologies, and different classification objectives are evaluated. Starting with the pretrained networks VGG19, ResNet50, and Inceptionv3, networks based on transfer learning are implemented, with different schemes for the classification stage and data augmentation. Similarly, an ad hoc architecture is proposed and evaluated without transfer learning for the classification objective with more examples. The results show that transfer learning produces acceptable results in most of the tested cases, indicating that it is a viable first step for using deep networks when there are not enough labeled images, which is a common problem when working with medical images. The ad hoc network, on the other hand, demonstrated good generalization with data augmentation and an acceptable accuracy value. 
The findings suggest that using convolutional neural networks with and without transfer learning to design classifiers for detecting pathologies in chest X-rays is a good idea.
|
BioMed research international
| 2022-08-03T00:00:00
|
[
"MalekBadr",
"ShahaAl-Otaibi",
"NazikAlturki",
"TanvirAbir"
] |
10.1155/2022/7833516
10.1201/b10866-37
10.1109/CVPR.2017.369
10.1109/ICSCCC.2018.8703316
10.1016/B978-0-12-816718-2.00008-7
10.1155/2022/1959371
10.1007/978-981-15-4112-4_7
10.3390/jcm11072054
10.23919/MIPRO48935.2020.9245376
10.1155/2022/4569879
10.14569/IJACSA.2021.0121026
10.1109/ELNANO.2018.8477564
10.1155/2021/8148772
10.1155/2022/3294954
10.1007/s00607-021-00992-0
10.1155/2021/5759184
10.1155/2021/6799202
10.24191/mjoc.v4i1.6095
10.1007/s11548-020-02305-w
10.1155/2021/1220374
10.1117/12.2293971
10.1007/s10916-021-01745-4
|
Detecting COVID-19 patients via MLES-Net deep learning models from X-Ray images.
|
Corona Virus Disease 2019 (COVID-19) first appeared in December 2019 and spread rapidly around the world. COVID-19 is a pneumonia caused by the novel coronavirus identified in 2019, and it is highly infectious and transmissible. By 7 May 2021, the cumulative number of deaths had reached 3,259,033. In order to diagnose infected people in time and prevent the spread of the virus, the diagnostic method for COVID-19 is extremely important. To address this problem, this paper introduces a Multi-Level Enhanced Sensation module (MLES) and proposes a new convolutional neural network model, MLES-Net, based on this module.
Attention automatically focuses on the key points in the input, and it can be parallelized, which allows it to replace recurrent neural networks to a certain extent and improves the efficiency of the model. We used the correlation between global and local features to generate the attention mask. First, the feature map was divided into multiple groups, and the initial attention mask was obtained by the dot product of each feature group and the feature after global pooling. Then the attention masks were normalized; at the same time, each group had two scaling and translating parameters so that the normalization could be undone if needed. Finally, the attention mask was obtained through the sigmoid function, and the feature at each location in the original feature group was scaled accordingly. Meanwhile, we used different classifiers on network models with different numbers of layers.
The network uses three classifiers, FC module (fully connected layer), GAP module (global average pooling layer) and GAPFC module (global average pooling layer and fully connected layer), to improve recognition efficiency. GAPFC as a classifier can obtain the best comprehensive effect by comparing the number of parameters, the amount of calculation and the detection accuracy. The experimental results show that the MLES-Net56-GAPFC achieves the best overall accuracy rate (95.27%) and the best recognition rate for COVID-19 category (100%).
MLES-Net56-GAPFC has good classification ability for the characteristics of high similarity between categories of COVID-19 X-Ray images and low intra-category variability. Considering the factors such as accuracy rate, number of network model parameters and calculation amount, we believe that the MLES-Net56-GAPFC network model has better practicability.
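The grouped attention described above (dot product of each feature group with its globally pooled feature, normalization with restorable per-group scale/shift parameters, then a sigmoid gate) can be sketched as follows. The tensor layout, parameter defaults, and epsilon are assumptions, since the abstract does not specify them:

```python
import numpy as np

def mles_attention(x, groups, gamma=None, beta=None, eps=1e-5):
    """Sketch of the attention mechanism described in the abstract.

    x: feature map of shape (C, H, W); gamma/beta: the per-group
    scaling and translating parameters (assumed defaults: 1 and 0).
    """
    c, h, w = x.shape
    gc = c // groups  # channels per feature group
    if gamma is None:
        gamma = np.ones(groups)
    if beta is None:
        beta = np.zeros(groups)
    out = np.empty_like(x)
    for g in range(groups):
        feat = x[g * gc:(g + 1) * gc]                        # one feature group
        pooled = feat.mean(axis=(1, 2), keepdims=True)       # global average pooling
        mask = (feat * pooled).sum(axis=0)                   # dot product per location
        mask = (mask - mask.mean()) / np.sqrt(mask.var() + eps)  # normalize
        mask = gamma[g] * mask + beta[g]                     # restorable scale/shift
        att = 1.0 / (1.0 + np.exp(-mask))                    # sigmoid gate in (0, 1)
        out[g * gc:(g + 1) * gc] = feat * att                # scale each location
    return out
```

Because the sigmoid gate lies in (0, 1), each location's feature is attenuated in proportion to how well it correlates with the group's global descriptor.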
|
BMC medical imaging
| 2022-07-31T00:00:00
|
[
"WeiWang",
"YongbinJiang",
"XinWang",
"PengZhang",
"JiLi"
] |
10.1186/s12880-022-00861-y
10.1016/j.physio.2020.03.003
10.1109/5.726791
10.1109/TIP.2017.2710620
10.2991/ijcis.d.191209.001
10.1186/s12880-019-0399-0
10.1109/TUFFC.2020.3005512
10.1109/ACCESS.2020.3001973
10.7150/ijms.46684
10.1109/TMI.2020.2995508
10.1007/s42979-020-00401-x
10.1007/s42979-020-00335-4
10.1007/s42979-020-00300-1
10.1109/ACCESS.2021.3058537
10.1007/s42979-020-00216-w
10.1007/s42979-020-00383-w
10.2991/ijcis.d.201123.001
10.1016/j.compbiomed.2020.103792
10.1016/j.compbiomed.2020.103869
10.1109/ACCESS.2020.3003810
10.1371/journal.pone.0235187
10.1016/j.imu.2020.100412
10.1049/ipr2.12474
10.1016/j.imu.2020.100505
|
A comparison of Covid-19 early detection between convolutional neural networks and radiologists.
|
The role of chest radiography in COVID-19 disease has changed since the beginning of the pandemic, from a diagnostic tool when microbiological resources were scarce to one focused on detecting and monitoring COVID-19 lung involvement. Early detection of the disease using chest radiographs is still helpful in resource-poor environments. However, the sensitivity of a chest radiograph for diagnosing COVID-19 is modest, even for expert radiologists. In this paper, the performance of a deep learning algorithm on the first clinical encounter is evaluated and compared with a group of radiologists with different years of experience.
The algorithm uses an ensemble of four deep convolutional networks, Ensemble4Covid, trained to detect COVID-19 on frontal chest radiographs. The algorithm was tested using images from the first clinical encounter of positive and negative cases. Its performance was compared with five radiologists on a smaller test subset of patients. The algorithm's performance was also validated using the public dataset COVIDx.
Compared to the consensus of five radiologists, the Ensemble4Covid model achieved an AUC of 0.85, whereas the radiologists achieved an AUC of 0.71. Compared with other state-of-the-art models, the performance of a single model of our ensemble achieved nonsignificant differences in the public dataset COVIDx.
The results show that the use of images from the first clinical encounter significantly drops the detection performance of COVID-19. The performance of our Ensemble4Covid under these challenging conditions is considerably higher compared to a consensus of five radiologists. Artificial intelligence can be used for the fast diagnosis of COVID-19.
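The AUC figures above can be computed mechanically from predicted scores via the rank-sum identity. How Ensemble4Covid actually combines its four networks is not stated in this abstract, so simple probability averaging is used here as an illustrative assumption:

```python
import numpy as np

def ensemble_predict(per_model_probs):
    """Average per-model COVID-19 probabilities (one row per model).
    Simple averaging is an assumption; the paper's fusion rule may differ."""
    return np.mean(np.asarray(per_model_probs, dtype=float), axis=0)

def roc_auc(y_true, scores):
    """ROC AUC via the pairwise (Mann-Whitney) formulation: the fraction of
    positive/negative pairs ranked correctly, ties counted as half."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```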
|
Insights into imaging
| 2022-07-29T00:00:00
|
[
"AlbertoAlbiol",
"FranciscoAlbiol",
"RobertoParedes",
"Juana MaríaPlasencia-Martínez",
"AnaBlanco Barrio",
"José M GarcíaSantos",
"SalvadorTortajada",
"Victoria MGonzález Montaño",
"Clara ERodríguez Godoy",
"SarayFernández Gómez",
"ElenaOliver-Garcia",
"Maríade la Iglesia Vayá",
"Francisca LMárquez Pérez",
"Juan IRayo Madrid"
] |
10.1186/s13244-022-01250-3
10.1001/JAMA.2020.21694
10.1007/S00330-020-07347-X
10.1007/S00330-020-06967-7
10.1148/RADIOL.2020201160/ASSET/IMAGES/LARGE/RADIOL.2020201160.FIG6.JPEG
10.1148/RADIOL.2020202944/ASSET/IMAGES/LARGE/RADIOL.2020202944.TBL4.JPEG
10.1148/RADIOL.2020203511/ASSET/IMAGES/LARGE/RADIOL.2020203511.FIG6C.JPEG
10.1148/RADIOL.2021204522/ASSET/IMAGES/LARGE/RADIOL.2021204522.FIG8C.JPEG
10.1109/TMI.2020.2993291
10.1007/s00330-020-07354-y
10.1007/s00330-020-07270-1
10.1148/RADIOL.2020201874
10.1016/J.MAYOCP.2020.07.024
10.1109/TKDE.2009.191
10.1037/H0031619
10.1214/ss/1177013815
10.1002/1097-0142
10.1148/RADIOL.2020201365/ASSET/IMAGES/LARGE/RADIOL.2020201365.TBL2.JPEG
10.1016/J.JACR.2019.05.019
10.1186/S41747-020-00203-Z/FIGURES/3
10.1148/RADIOL.2020204226
|
Automatic scoring of COVID-19 severity in X-ray imaging based on a novel deep learning workflow.
|
In this study, we propose a two-stage workflow for the segmentation and scoring of lung diseases. The workflow inherits the quantification, qualification, and visual assessment of lung diseases on X-ray images as estimated by radiologists and clinicians. It requires the fulfillment of two core stages devoted to lung and disease segmentation, as well as an additional post-processing stage devoted to scoring. The latter integrated block is utilized mainly for the estimation of segment scores and computes the overall severity score of a patient. The models of the proposed workflow were trained and tested on four publicly available X-ray datasets of COVID-19 patients and two X-ray datasets of patients with no pulmonary pathology. Based on a combined dataset consisting of 580 COVID-19 patients and 784 patients with no disorders, our best-performing algorithm is based on a combination of DeepLabV3+ for lung segmentation and MA-Net for disease segmentation. The proposed algorithm's mean absolute error (MAE) of 0.30 is significantly lower than that of established COVID-19 algorithms, BS-net and COVID-Net-S, which have MAEs of 2.52 and 1.83, respectively. Moreover, the proposed two-stage workflow was not only more accurate but also computationally efficient: it was approximately 11 times faster than the mentioned methods. In summary, we propose an accurate, time-efficient, and versatile approach for the segmentation and scoring of lung diseases, illustrated for COVID-19 and with broader future applications for pneumonia, tuberculosis, pneumothorax, and others.
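The post-processing stage described above (per-segment scores summed into an overall severity score) can be sketched from a lung mask and a disease mask. The segment layout (horizontal bands) and grading cutoffs below are illustrative assumptions; the abstract does not define them:

```python
import numpy as np

def severity_score(lung_mask, disease_mask, bands=3, cutoffs=(0.25, 0.5, 0.75)):
    """Hypothetical scoring stage: divide the lungs into horizontal bands
    (segments), grade each segment 0-3 by its diseased lung fraction,
    and sum the grades into an overall severity score."""
    lung_mask = lung_mask.astype(bool)
    disease_mask = disease_mask.astype(bool) & lung_mask  # disease inside lungs only
    h = lung_mask.shape[0]
    total = 0
    for b in range(bands):
        rows = slice(b * h // bands, (b + 1) * h // bands)
        lung_px = lung_mask[rows].sum()
        if lung_px == 0:
            continue  # segment contains no lung
        frac = disease_mask[rows].sum() / lung_px
        total += int(sum(frac > c for c in cutoffs))  # segment grade 0..3
    return total
```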
|
Scientific reports
| 2022-07-28T00:00:00
|
[
"Viacheslav VDanilov",
"DianaLitmanovich",
"AlexProutski",
"AlexanderKirpich",
"DatoNefaridze",
"AlexKarpovsky",
"YuriyGankin"
] |
10.1038/s41598-022-15013-z
10.2139/ssrn.3685938
10.1093/cid/ciaa1012
10.1016/j.jaci.2020.04.006
10.1016/j.cmi.2020.04.012
10.1148/ryct.2020200034
10.1148/radiol.2020200527
10.1016/j.chest.2020.04.003
10.1007/s11547-020-01202-1
10.1007/s10489-020-01829-7
10.1016/j.imu.2021.100835
10.1016/j.compbiomed.2020.103869
10.1007/s13755-020-00116-6
10.1016/j.eswa.2020.114054
10.1007/s42600-020-00091-7
10.1109/ACCESS.2020.3025372
10.17632/8gf9vpkhgy.1
10.17632/36fjrg9s69.1
10.1016/j.media.2021.102046
10.1038/s41598-021-88538-4
10.1177/0885066603251897
10.1371/journal.pone.0093885
10.1186/s12931-019-1201-0
10.1038/s41572-018-0051-2
10.1371/journal.pone.0197418
10.1186/s12880-015-0103-y
10.1136/thoraxjnl-2017-211280
10.3389/fphys.2021.672823
10.1186/s12890-020-01286-5
10.1148/radiol.2020201160
10.1038/s41598-020-79470-0
10.1016/j.ijid.2020.05.021
10.1007/s00330-020-07270-1
10.11613/BM.2012.031
10.1109/TIP.2010.2044963
10.1016/j.ijmedinf.2014.10.004
10.1016/j.afjem.2020.09.009
|
Mortality Prediction Analysis among COVID-19 Inpatients Using Clinical Variables and Deep Learning Chest Radiography Imaging Features.
|
The emergence of the COVID-19 pandemic over a relatively brief interval illustrates the need for rapid data-driven approaches to facilitate clinical decision making. We examined a machine learning process to predict inpatient mortality among COVID-19 patients using clinical and chest radiographic data. Modeling was performed with a de-identified dataset of encounters prior to widespread vaccine availability. Non-imaging predictors included demographics, pre-admission clinical history, and past medical history variables. Imaging features were extracted from chest radiographs by applying a deep convolutional neural network with transfer learning. A multi-layer perceptron combining 64 deep learning features from chest radiographs with 98 patient clinical features was trained to predict mortality. The Local Interpretable Model-Agnostic Explanations (LIME) method was used to explain model predictions. Non-imaging data alone predicted mortality with an ROC-AUC of 0.87 ± 0.03 (mean ± SD), while the addition of imaging data improved prediction slightly (ROC-AUC: 0.91 ± 0.02). The application of LIME to the combined imaging and clinical model found HbA1c values to contribute the most to model prediction (17.1 ± 1.7%), while imaging contributed 8.8 ± 2.8%. Age, gender, and BMI contributed 8.7%, 8.2%, and 7.1%, respectively. Our findings demonstrate a viable explainable AI approach to quantify the contributions of imaging and clinical data to COVID mortality predictions.
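The percentage contributions quoted above (imaging 8.8%, HbA1c 17.1%, etc.) come from aggregating per-feature explanation weights into groups. A hedged sketch of that aggregation step, assuming LIME-style per-feature attribution values as input:

```python
import numpy as np

def group_contributions(weights, groups):
    """Aggregate per-feature attribution weights (e.g., from LIME) into
    percentage contributions per feature group, as done in the abstract
    to compare imaging vs. clinical features.

    weights: 1-D array of per-feature attribution values.
    groups:  sequence mapping each feature index to a group name.
    """
    totals = {}
    for w, g in zip(np.abs(weights), groups):
        totals[g] = totals.get(g, 0.0) + w  # absolute weight per group
    s = sum(totals.values())
    return {g: 100.0 * v / s for g, v in totals.items()}
```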
|
Tomography (Ann Arbor, Mich.)
| 2022-07-28T00:00:00
|
[
"Xuan VNguyen",
"EnginDikici",
"SemaCandemir",
"Robyn LBall",
"Luciano MPrevedello"
] |
10.3390/tomography8040151
10.1038/s41586-020-2008-3
10.1148/radiol.2020200490
10.1002/path.5549
10.1148/radiol.2020200642
10.1109/RBME.2020.2987975
10.1016/j.bbe.2020.08.008
10.1038/s41598-020-76550-z
10.1038/s41591-020-0931-3
10.1109/TMI.2020.2993291
10.1148/radiol.2020204226
10.1186/s40537-016-0043-6
10.1007/s10278-013-9622-7
10.1007/978-3-319-24574-4_28
10.2214/ajr.174.1.1740071
10.1016/j.media.2005.02.002
10.1371/journal.pone.0190069
10.1109/ACCESS.2021.3086020
10.1109/ACCESS.2020.2976199
10.1016/S2589-7500(21)00039-X
10.1007/s00330-022-08588-8
10.1183/13993003.02113-2020
10.7717/peerj.10337
10.1186/s12911-021-01742-0
10.7717/peerj-cs.889
10.3389/fdgth.2021.681608
10.3390/diagnostics11081383
10.1002/dmrr.3476
|
Federated Learning Approach with Pre-Trained Deep Learning Models for COVID-19 Detection from Unsegmented CT images.
|
(1) Background: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by SARS-CoV-2. Reverse transcription polymerase chain reaction (RT-PCR) remains the current gold standard for detecting SARS-CoV-2 infections in nasopharyngeal swabs. In Romania, the first patient to have contracted COVID-19 was officially declared on 26 February 2020. (2) Methods: This study proposes a federated learning approach with pre-trained deep learning models for COVID-19 detection. Three clients were locally deployed, each with its own dataset. The goal of the clients was to collaborate in order to obtain a global model without sharing samples from their datasets. (3) Results: The algorithm we developed was connected to our internal picture archiving and communication system and, after running retrospectively, it encountered chest CT changes suggestive of COVID-19 in a patient investigated in our medical imaging department on 28 January 2020. (4) Conclusions: Based on our results, we recommend using automated AI-assisted software to detect COVID-19 based on lung imaging changes as an adjuvant diagnostic method to the current gold standard (RT-PCR), in order to greatly enhance the management of these patients and also limit the spread of the disease, not only to the general population but also to healthcare professionals.
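The collaboration described above, where clients contribute to a global model without sharing samples, is typically realized by a FedAvg-style server that averages layer weights across clients. A minimal sketch under that assumption (the paper's exact aggregation rule is not given in the abstract):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each layer's weights across
    clients, weighted by local dataset size, without the server ever
    seeing the training samples themselves.

    client_weights: list (one entry per client) of lists of layer arrays.
    client_sizes:   number of local training samples per client.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # dataset-size weighting
    n_layers = len(client_weights[0])
    return [sum(c * w[k] for c, w in zip(coeffs, client_weights))
            for k in range(n_layers)]
```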
|
Life (Basel, Switzerland)
| 2022-07-28T00:00:00
|
[
"Lucian MihaiFlorescu",
"Costin TeodorStreba",
"Mircea-SebastianŞerbănescu",
"MădălinMămuleanu",
"Dan NicolaeFlorescu",
"Rossy VlăduţTeică",
"Raluca ElenaNica",
"Ioana AndreeaGheonea"
] |
10.3390/life12070958
10.3389/fmicb.2020.631736
10.1002/jmv.25766
10.1056/NEJMoa2001017
10.3390/life12010077
10.1148/ryct.2020200034
10.1016/j.jmoldx.2021.04.009
10.47162/RJME.61.2.21
10.1007/s00330-021-07937-3
10.1111/exsy.12759
10.1038/nature14539
10.1016/j.patcog.2021.108081
10.1016/j.asoc.2020.106912
10.1007/s13246-020-00865-4
10.1016/j.compbiomed.2020.103869
10.3390/diagnostics10060358
10.1016/j.imu.2020.100360
10.1109/ACCESS.2020.3010287
10.1007/s00264-020-04609-7
10.1016/j.cmpb.2020.105608
10.1016/j.cmpb.2020.105581
10.1016/j.eswa.2020.114054
10.1016/j.compbiomed.2020.103795
10.1007/s10489-020-01826-w
10.2196/19569
10.1183/13993003.00775-2020
10.1016/j.asoc.2021.107330
10.1109/JSEN.2021.3076767
10.7910/DVN/6ACUZJ
10.17632/3y55vgckg6.2
10.1148/radiol.11092149
10.12968/hmed.2020.0077
10.5114/pjr.2021.103237
10.1016/j.ijid.2014.12.007
10.1016/j.crad.2016.06.110
10.1148/radiographics.21.2.g01mr17403
10.17632/ygvgkdbmvt.1
10.7937/TCIA.2020.NNC2-0461
10.1073/pnas.79.8.2554
10.1109/EMBC.2017.8037515
10.1007/s10278-021-00508-4
10.21037/jtd.2017.03.157
10.1167/tvst.9.2.14
10.1364/AO.29.004790
10.1016/j.jbi.2014.05.006
10.1016/j.jacr.2022.03.015
10.1109/TCOMM.2020.2990686
10.11919/j.issn.1002-0829.215010
10.1007/s11263-019-01228-7
|
Bag of Tricks for Improving Deep Learning Performance on Multimodal Image Classification.
|
A comprehensive medical image-based diagnosis is usually performed across various image modalities before passing a final decision; hence, designing a deep learning model that can use any medical image modality to diagnose a particular disease is of great interest. The available methods are multi-staged, with many computational bottlenecks in between. This paper presents an improved end-to-end method of multimodal image classification using deep learning models. We present top research methods developed over the years to improve models trained from scratch and transfer learning approaches. We show that when fully trained, a model can first implicitly discriminate the imaging modality and then diagnose the relevant disease. Our developed models were applied to COVID-19 classification from chest X-ray, CT scan, and lung ultrasound image modalities. The model that achieved the highest accuracy correctly maps all input images to their respective modality, then classifies the disease achieving overall 91.07% accuracy.
|
Bioengineering (Basel, Switzerland)
| 2022-07-26T00:00:00
|
[
"Steve AAdeshina",
"Adeyinka PAdedigba"
] |
10.3390/bioengineering9070312
10.1111/exd.13777
10.1109/ACCESS.2020.3016780
10.3390/diagnostics10080565
10.1007/s40747-021-00321-0
10.31083/j.fbl2707198
10.1101/2020.04.24.20078584
10.1109/ACCESS.2020.3010287
10.48550/arXiv.1907.08610
10.1016/j.ibmed.2021.100034
10.3390/bioengineering9040161
|
A Novel Deep Learning and Ensemble Learning Mechanism for Delta-Type COVID-19 Detection.
|
Recently, the novel coronavirus disease 2019 (COVID-19) has posed many challenges to the research community by presenting the grievous severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which has resulted in a huge number of mortalities and high morbidities worldwide. Furthermore, symptom-based variations in virus type add new challenges for researchers and practitioners to combat. COVID-19-infected patients present distinctive clinical features, including dry cough, fever, dyspnea, and fatigue. The chest X-ray is considered a simple and non-invasive clinical adjunct that performs a key role in the identification of radiographic responses related to COVID-19 infection. Nevertheless, the limited availability of proficient radiologists to interpret the X-ray images and the elusive aspects of the disease's radiographic responses remain the biggest bottlenecks in manual diagnosis. To address these issues, the proposed research study presents a hybrid deep learning model for the accurate diagnosis of Delta-type COVID-19 infection using X-ray images. This hybrid model comprises visual geometry group 16 (VGG16) and a support vector machine (SVM), where the VGG16 handles the identification process, while the SVM is used for the severity-based analysis of the infected people. An overall accuracy rate of 97.37% is recorded for the proposed model. Other performance metrics such as the area under the curve (AUC), precision, F-score, misclassification rate, and confusion matrix are used for validation and analysis purposes. Finally, the applicability of the proposed model is compared with other relevant techniques. The high identification rates underline the applicability of the formulated hybrid model in the targeted research domain.
|
Frontiers in public health
| 2022-07-26T00:00:00
|
[
"Habib UllahKhan",
"SulaimanKhan",
"ShahNazir"
] |
10.3389/fpubh.2022.875971
10.1016/j.compbiomed.2020.103805
10.1016/j.eswa.2020.114054
10.1109/INMIC50486.2020.9318212
10.1007/s10489-020-01902-1
10.1109/MITP.2020.3036820
10.1016/j.eswa.2020.113909
10.1056/NEJMoa2001191
10.1016/j.ijid.2020.01.009
10.1056/NEJMc2001468
10.1016/j.compeleceng.2020.106906
10.32604/cmc.2021.013878
10.1007/s10044-021-00970-4
10.1038/s41598-020-76550-z
10.1093/jamia/ocaa280
10.3390/sym12040651
10.1016/j.engappai.2019.03.021
10.1016/j.advengsoft.2017.05.014
10.1016/j.engappai.2020.103541
10.1038/s41598-020-71294-2
10.22581/muet1982.2101.14
10.1177/0020294020964826
|
A Deep Learning and Handcrafted Based Computationally Intelligent Technique for Effective COVID-19 Detection from X-ray/CT-scan Imaging.
|
The world has witnessed dramatic changes since the advent of COVID-19 in the last few days of 2019. For more than two years, COVID-19 has badly affected the world in diverse ways. It has affected not only human health and the mortality rate but also the economic situation on a global scale. There is an urgent need today to cope with this pandemic and its diverse effects. Medical imaging has revolutionized the treatment of various diseases during the last four decades. Automated detection and classification systems have proven to be of great assistance to doctors and the scientific community in the treatment of various diseases. In this paper, a novel framework for an efficient COVID-19 classification system is proposed which uses a hybrid feature extraction approach. After preprocessing the image data, two types of features, i.e., deep learning and handcrafted, are extracted. For the deep learning features, two pre-trained models, namely ResNet101 and DenseNet201, are used. Handcrafted features are extracted using the Weber Local Descriptor (WLD); the excitation component of the WLD is utilized, and the features are reduced using the DCT. The deep features from both models and the handcrafted features are fused, and significant features are selected using entropy. Experiments have proven the effectiveness of the proposed model. A comprehensive set of experiments has been performed, and the results are compared with existing well-known methods. The proposed technique performs better in terms of accuracy and time.
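The WLD excitation component mentioned above is the standard differential excitation, xi = arctan(sum over the 3x3 neighborhood of (neighbor - center) / center). A minimal sketch (border pixels are simply skipped here, which is a simplification):

```python
import numpy as np

def wld_excitation(img, eps=1e-6):
    """Differential excitation component of the Weber Local Descriptor:
    xi(x_c) = arctan(sum_i (x_i - x_c) / x_c) over each pixel's 3x3
    neighborhood. Returns a map for the interior pixels only."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]            # center pixels
    diff_sum = np.zeros_like(c)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # shifted view aligned with the centers
            diff_sum += img[1 + dy:img.shape[0] - 1 + dy,
                            1 + dx:img.shape[1] - 1 + dx] - c
    return np.arctan(diff_sum / (c + eps))  # eps guards division by zero
```

On a uniform image the excitation is zero everywhere; a center pixel brighter than its neighborhood yields a negative excitation.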
|
Journal of grid computing
| 2022-07-26T00:00:00
|
[
"MohammedHabib",
"MuhammadRamzan",
"Sajid AliKhan"
] |
10.1007/s10723-022-09615-0
10.1109/ACCESS.2020.2999468
10.1007/s11063-018-09976-2
10.1109/ACCESS.2017.2789324
10.1007/s10723-020-09506-2
10.1007/s10723-021-09594-8
10.1007/s10723-021-09564-0
10.1016/j.compbiomed.2018.03.016
10.1007/s10723-020-09513-3
10.1007/s10723-021-09590-y
10.1016/j.diii.2020.03.014
10.1016/j.chaos.2020.109944
10.1016/j.compbiomed.2021.104453
10.1016/j.chemolab.2020.104054
10.1016/j.bspc.2021.102987
10.1016/j.bbe.2021.05.013
10.1016/j.compbiomed.2021.104306
10.1016/j.eswa.2021.115650
10.1016/j.bspc.2021.102602
10.1016/j.bspc.2021.102588
10.1016/j.clinimag.2021.07.004
10.1016/j.iot.2021.100377
10.1016/j.asoc.2021.107184
10.1016/j.eswa.2021.114883
10.1007/s10489-021-02393-4
10.1007/s10489-020-01867-1
10.1007/s11042-021-11192-5
10.1023/B:VLSI.0000028532.53893.82
10.1109/TPAMI.2009.155
10.1109/T-C.1974.223784
10.1109/TNNLS.2020.2966319
10.1016/j.compbiomed.2020.103792
10.1007/s13246-020-00865-4
10.1007/s13246-020-00952-6
10.1109/ACCESS.2020.2994762
|
An efficient deep learning-based framework for tuberculosis detection using chest X-ray images.
|
Early diagnosis of tuberculosis (TB) is an essential and challenging task to prevent disease, decrease mortality risk, and stop transmission to other people. The chest X-ray (CXR) is the top choice for lung disease screening in clinics because it is cost-effective and easily accessible in most countries. However, manual screening of CXR images is a heavy burden for radiologists, resulting in a high rate of inter-observer variances. Hence, proposing a cost-effective and accurate computer aided diagnosis (CAD) system for TB diagnosis is challenging for researchers. In this research, we proposed an efficient and straightforward deep learning network called TBXNet, which can accurately classify a large number of TB CXR images. The network is based on five dual convolutions blocks with varying filter sizes of 32, 64, 128, 256 and 512, respectively. The dual convolution blocks are fused with a pre-trained layer in the fusion layer of the network. In addition, the pre-trained layer is utilized for transferring pre-trained knowledge into the fusion layer. The proposed TBXNet has achieved an accuracy of 98.98%, and 99.17% on Dataset A and Dataset B, respectively. Furthermore, the generalizability of the proposed work is validated against Dataset C, which is based on normal, tuberculous, pneumonia, and COVID-19 CXR images. The TBXNet has obtained the highest results in Precision (95.67%), Recall (95.10%), F1-score (95.38%), and Accuracy (95.10%), which is comparatively better than all other state-of-the-art methods.
|
Tuberculosis (Edinburgh, Scotland)
| 2022-07-26T00:00:00
|
[
"AhmedIqbal",
"MuhammadUsman",
"ZohairAhmed"
] |
10.1016/j.tube.2022.102234
|
Multi-population generalizability of a deep learning-based chest radiograph severity score for COVID-19.
|
To tune and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. A published convolutional Siamese neural network-based model previously trained on hospitalized patients with COVID-19 was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity (pulmonary x-ray severity (PXS) score). The model was evaluated on CXRs from 4 test sets, including 3 from the United States (patients hospitalized at an academic medical center (N = 154), patients hospitalized at a community hospital (N = 113), and outpatients (N = 108)) and 1 from Brazil (patients at an academic medical center emergency department (N = 303)). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson R). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. Tuning the deep learning model with outpatient data showed high model performance in 2 United States hospitalized patient datasets (R = 0.88 and R = 0.90, compared to baseline R = 0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (R = 0.86 and R = 0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. A deep learning model that extracts a COVID-19 severity score on CXRs showed generalizable performance across multiple populations from 2 continents, including outpatients and hospitalized patients.
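The performance measure used throughout this study, the Pearson correlation between the model's PXS scores and the radiologists' reference severity scores, is straightforward to compute:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors,
    e.g., model PXS scores vs. reference standard severity scores."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())
```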
|
Medicine
| 2022-07-23T00:00:00
|
[
"Matthew DLi",
"Nishanth TArun",
"MehakAggarwal",
"SharutGupta",
"PraveerSingh",
"Brent PLittle",
"Dexter PMendoza",
"Gustavo C ACorradi",
"Marcelo STakahashi",
"Suely FFerraciolli",
"Marc DSucci",
"MinLang",
"Bernardo CBizzo",
"IttaiDayan",
"Felipe CKitamura",
"JayashreeKalpathy-Cramer"
] |
10.1097/MD.0000000000029587
|
Ftl-CoV19: A Transfer Learning Approach to Detect COVID-19.
|
COVID-19 is an infectious and contagious disease caused by the new coronavirus. The total number of cases is over 19 million and continues to grow. A common symptom noticed among COVID-19 patients is lung infection that results in breathlessness, and the lack of essential resources such as testing, oxygen, and ventilators enhances its severity. Chest X-rays can be used to design and develop a COVID-19 detection mechanism for quicker diagnosis using AI and machine learning techniques. Owing to this silver lining, various new COVID-19 detection techniques and prediction models have been introduced in recent times based on chest radiography images. However, due to a high level of unpredictability and the absence of essential data, standard models have shown low efficiency and also suffer from overheads and complexities. This paper proposes fine-tuning transfer learning-coronavirus 19 (Ftl-CoV19), a model for COVID-19 detection through chest X-rays, which embraces the ideas of transfer learning on a pretrained VGG16 model, including a combination of convolution, max-pooling, and dense layers at different stages of the model. Ftl-CoV19 reported promising experimental results; it achieved training and validation accuracies of 98.82% and 99.27%, with a precision of 100%, recall of 98%, and F1 score of 99%. These results outperformed conventional state-of-the-art models such as CNN, ResNet50, InceptionV3, and Xception.
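The reported F1 score is consistent with the reported precision and recall, since F1 is their harmonic mean; with precision 1.00 and recall 0.98 this gives approximately 0.99:

```python
def f1_score(precision, recall):
    """F1 score as the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```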
|
Computational intelligence and neuroscience
| 2022-07-23T00:00:00
|
[
"TarishiSingh",
"PraneetSaurabh",
"DhananjayBisen",
"LalitKane",
"MayankPathak",
"G RSinha"
] |
10.1155/2022/1953992
10.15557/pimr.2020.0024
10.1016/j.genrep.2020.100756
10.1109/TAI.2021.3062771
10.1109/tmi.2020.2995508
10.1109/ACCESS.2020.2997311
10.1109/RBME.2020.2987975
10.1109/CANDO-EPE51100.2020.9337794
10.1109/TCYB.2019.2950779
10.1016/j.numecd.2020.07.031
10.1155/2020/9756518
10.1101/2020.10.13.20212035
10.1109/mpuls.2020.3008354
10.1109/access.2020.3009328
10.1109/tmi.2020.2993291
10.1109/tmi.2020.2995965
10.1109/ACCESS.2018.2814605
10.1016/j.imu.2020.100360
10.1109/ACCESS.2019.2946000
10.17632/9xkhgts2s6.1
10.1007/s10489-020-01826-w
10.48550/arXiv.1409.1556
10.3390/s19163556
10.1007/s11042-020-10038-w
|
Automated diagnosis and prognosis of COVID-19 pneumonia from initial ER chest X-rays using deep learning.
|
Airspace disease as seen on chest X-rays is an important point in triage for patients initially presenting to the emergency department with suspected COVID-19 infection. The purpose of this study is to evaluate a previously trained interpretable deep learning algorithm for the diagnosis and prognosis of COVID-19 pneumonia from chest X-rays obtained in the ED.
This retrospective study included 2456 (50% RT-PCR positive for COVID-19) adult patients who received both a chest X-ray and SARS-CoV-2 RT-PCR test from January 2020 to March of 2021 in the emergency department at a single U.S.
A total of 2000 patients were included as an additional training cohort and 456 patients in the randomized internal holdout testing cohort for a previously trained Siemens AI-Radiology Companion deep learning convolutional neural network algorithm. Three cardiothoracic fellowship-trained radiologists systematically evaluated each chest X-ray and generated an airspace disease area-based severity score which was compared against the same score produced by artificial intelligence. The interobserver agreement, diagnostic accuracy, and predictive capability for inpatient outcomes were assessed. Principal statistical tests used in this study include both univariate and multivariate logistic regression.
Overall ICC was 0.820 (95% CI 0.790-0.840). The diagnostic AUC for SARS-CoV-2 RT-PCR positivity was 0.890 (95% CI 0.861-0.920) for the neural network and 0.936 (95% CI 0.918-0.960) for radiologists. Airspace opacities score by AI alone predicted ICU admission (AUC = 0.870) and mortality (0.829) in all patients. Addition of age and BMI into a multivariate log model improved mortality prediction (AUC = 0.906).
The deep learning algorithm provides an accurate and interpretable assessment of the disease burden in COVID-19 pneumonia on chest radiographs. The reported severity scores correlate with expert assessment and accurately predict important clinical outcomes. The algorithm contributes additional prognostic information not currently incorporated into patient management.
|
BMC infectious diseases
| 2022-07-22T00:00:00
|
[
"Jordan HChamberlin",
"GilbertoAquino",
"SophiaNance",
"AndrewWortham",
"NathanLeaphart",
"NamrataPaladugu",
"SeanBrady",
"HenryBaird",
"MatthewFiegel",
"LoganFitzpatrick",
"MadisonKocher",
"FlorinGhesu",
"AwaisMansoor",
"PhilippHoelzer",
"MathisZimmermann",
"W EnnisJames",
"D JamesonDennis",
"Brian AHouston",
"Ismail MKabakus",
"DhirajBaruah",
"U JosephSchoepf",
"Jeremy RBurt"
] |
10.1186/s12879-022-07617-7
10.1136/bmj.m2426
10.1007/s11547-020-01232-9
10.1016/j.ijid.2020.05.021
10.1186/s41747-020-00195-w
10.1148/radiol.2021219021
10.1148/ryct.2020200028
10.1186/s43055-020-00296-x
10.1148/ryct.2020200337
10.1148/radiol.2021219022
10.1136/bmj.m1328
10.1016/j.jiph.2020.06.028
10.1109/RBME.2020.2987975
10.1038/s41598-021-93719-2
10.1038/s42256-021-00307-0
10.1148/radiol.2020202944
10.1148/ryai.2020200079
10.1371/journal.pone.0236621
10.1148/ryct.2020200280
10.1148/ryai.2020200029
10.1001/jamanetworkopen.2021.41096
10.1109/TPAMI.2018.2858826
10.1371/journal.pmed.1002707
10.1016/j.chest.2020.04.003
10.1148/ryai.2020190043
10.1016/j.compbiomed.2021.104665
10.18280/ts.370313
|
Simplified Transfer Learning for Chest Radiography Models Using Less Data.
|
Background Developing deep learning models for radiology requires large data sets and substantial computational resources. Data set size limitations can be further exacerbated by distribution shifts, such as rapid changes in patient populations and standard of care during the COVID-19 pandemic. A common partial mitigation is transfer learning by pretraining a "generic network" on a large nonmedical data set and then fine-tuning on a task-specific radiology data set. Purpose To reduce data set size requirements for chest radiography deep learning models by using an advanced machine learning approach (supervised contrastive [SupCon] learning) to generate chest radiography networks. Materials and Methods SupCon helped generate chest radiography networks from 821 544 chest radiographs from India and the United States. The chest radiography networks were used as a starting point for further machine learning model development for 10 prediction tasks (eg, airspace opacity, fracture, tuberculosis, and COVID-19 outcomes) by using five data sets comprising 684 955 chest radiographs from India, the United States, and China. Three model development setups were tested (linear classifier, nonlinear classifier, and fine-tuning the full network) with different data set sizes from eight to 8
|
Radiology
| 2022-07-20T00:00:00
|
[
"Andrew BSellergren",
"ChristinaChen",
"ZaidNabulsi",
"YuanzhenLi",
"AaronMaschinot",
"AaronSarna",
"JennyHuang",
"CharlesLau",
"Sreenivasa RajuKalidindi",
"MozziyarEtemadi",
"FlorenciaGarcia-Vicente",
"DavidMelnick",
"YunLiu",
"KrishEswaran",
"DanielTse",
"NeeralBeladia",
"DilipKrishnan",
"ShravyaShetty"
] |
10.1148/radiol.212482
|
COVID-19 Classification from Chest X-Ray Images: A Framework of Deep Explainable Artificial Intelligence.
|
COVID-19 detection and classification using chest X-ray images is a current hot research topic based on the important application known as medical image analysis. To halt the spread of COVID-19, it is critical to identify the infection as soon as possible. Due to time constraints and the expertise required of radiologists, manually diagnosing this infection from chest X-ray images is a difficult and time-consuming process. Artificial intelligence techniques have had a significant impact on medical image analysis and have also introduced several techniques for COVID-19 diagnosis. Deep learning and explainable AI have shown significant popularity among AI techniques for COVID-19 detection and classification. In this work, we propose a deep learning and explainable AI technique for the diagnosis and classification of COVID-19 using chest X-ray images. Initially, a hybrid contrast enhancement technique is proposed and applied to the original images that are later utilized for the training of two modified deep learning models. The deep transfer learning concept is selected for the training of pretrained modified models that are later employed for feature extraction. Features of both deep models are fused using improved canonical correlation analysis that is further optimized using a hybrid algorithm named Whale-Elephant Herding. Through this algorithm, the best features are selected and classified using an extreme learning machine (ELM). Moreover, the modified deep models are utilized for Grad-CAM visualization. The experimental process was conducted on three publicly available datasets and achieved accuracies of 99.1, 98.2, and 96.7%, respectively. Moreover, an ablation study was performed and showed that the proposed accuracy is better than that of other methods.
|
Computational intelligence and neuroscience
| 2022-07-19T00:00:00
|
[
"Muhammad AttiqueKhan",
"MariumAzhar",
"KainatIbrar",
"AbdullahAlqahtani",
"ShtwaiAlsubai",
"AdelBinbusayyis",
"Ye JinKim",
"ByoungcholChang"
] |
10.1155/2022/4254631
10.3390/diagnostics12030741
10.1038/s41598-022-10723-w
10.1371/journal.pone.0246772
10.1016/j.genhosppsych.2020.07.006
10.1016/j.eng.2020.04.010
10.1002/1096-9071(200103)63:3<259::aid-jmv1010>3.0.co;2-x
10.1016/j.eswa.2020.114054
10.7326/m20-1382
10.1007/s00330-021-07715-1
10.1109/tmi.2016.2553401
10.1109/trpms.2019.2896399
10.1049/trit.2019.0028
10.1016/j.compag.2021.106081
10.3390/e23060667
10.1007/978-3-030-27272-2_14
10.1007/s11042-019-08111-0
10.1080/03772063.2017.1331757
10.1007/s11517-020-02302-w
10.1016/j.neucom.2021.03.035
10.1109/tpami.2016.2644615
10.1016/j.artmed.2021.102114
10.1109/tip.2021.3058783
10.3390/s21020455
10.1038/s41598-021-95680-6
10.36227/techrxiv.15135846.v1
10.1109/jbhi.2021.3074893
10.3390/healthcare9091099
10.1007/s00521-020-05636-6
10.3390/a14110337
10.3390/s21165657
10.1038/s42256-021-00338-7
10.1109/cbms52027.2021.00103
10.1016/j.advengsoft.2016.01.008
10.1109/iscbi.2015.8
10.1109/iccv.2017.74
|
A Deep Learning Approach to Identify Chest Computed Tomography Features for Prediction of SARS-CoV-2 Infection Outcomes.
|
There is still an urgent need to develop effective treatments to help minimize the cases of severe COVID-19. A number of tools have now been developed and applied to address these issues, such as the use of non-contrast chest computed tomography (CT) for evaluation and grading of the associated lung damage. Here we used a deep learning approach for predicting the outcome of 1078 patients admitted into the Baqiyatallah Hospital in Tehran, Iran, suffering from COVID-19 infections in the first wave of the pandemic. These were classified into two groups of non-severe and severe cases according to features on their CT scans with accuracies of approximately 0.90. We suggest that incorporation of molecular and/or clinical features, such as multiplex immunoassay or laboratory findings, will increase the accuracy and sensitivity of the model for COVID-19-related predictions.
|
Methods in molecular biology (Clifton, N.J.)
| 2022-07-16T00:00:00
|
[
"AmirhosseinSahebkar",
"MitraAbbasifard",
"SamiraChaibakhsh",
"Paul CGuest",
"Mohamad AminPourhoseingholi",
"AmirVahedian-Azimi",
"PrashantKesharwani",
"TannazJamialahmadi"
] |
10.1007/978-1-0716-2395-4_30
10.1039/D0LC01156H
10.1016/j.cie.2021.107235
10.1016/S0140-6736(20)30360-3
10.7150/ijms.50568
10.1186/s12879-021-06528-3
10.1148/radiol.2020200330
10.1148/radiol.2020200343
10.1007/978-3-030-59261-5_24
10.2214/AJR.20.22975
10.1148/radiol.2020200230
10.1021/acsnano.0c02624
10.5114/pjr.2020.98009
10.1136/bmjhci-2021-100389
10.1148/ryct.2021200510
10.1038/s41598-021-99015-3
10.3390/diagnostics11091712
10.1007/978-3-030-71697-4_11
10.1148/radiol.2462070712
10.1186/s12879-019-4592-0
10.1148/radiol.2363040958
10.1016/j.ejrad.2021.109583
10.1038/s41467-020-20657-4
10.3348/kjr.2020.0994
10.1016/j.compbiomed.2021.104304
10.3348/kjr.2020.1104
10.1038/s41598-021-93719-2
|
Feature-level ensemble approach for COVID-19 detection using chest X-ray images.
|
Severe acute respiratory syndrome coronavirus 2 (SARS CoV-2), also known as the coronavirus disease 2019 (COVID-19), has threatened many human beings around the world and capsized economies at unprecedented magnitudes. Therefore, the detection of this disease using chest X-ray modalities has played a pivotal role in producing fast and accurate medical diagnoses, especially in countries that are unable to afford laboratory testing kits. However, identifying and distinguishing COVID-19 from virtually similar thoracic abnormalities utilizing medical images is challenging because it is time-consuming, demanding, and susceptible to human-based errors. Therefore, artificial-intelligence-driven automated diagnoses, which excludes direct human intervention, may potentially be used to achieve consistently accurate performances. In this study, we aimed to (i) obtain a customized dataset composed of a relatively small number of images collected from publicly available datasets; (ii) present the efficient integration of the shallow handcrafted features obtained from local descriptors, radiomics features specialized for medical images, and deep features aggregated from pre-trained deep learning architectures; and (iii) distinguish COVID-19 patients from healthy controls and pneumonia patients using a collection of conventional machine learning classifiers. By conducting extensive experiments, we demonstrated that the feature-based ensemble approach provided the best classification metrics, and this approach explicitly outperformed schemes that used only either local, radiomic, or deep features. In addition, our proposed method achieved state-of-the-art multi-class classification results compared to the baseline reference for the currently available COVID-19 datasets.
|
PloS one
| 2022-07-15T00:00:00
|
[
"Thi Kieu KhanhHo",
"JeonghwanGwak"
] |
10.1371/journal.pone.0268430
10.1056/NEJMoa2002032
10.1128/JCM.00512-20
10.1093/cid/ciaa310
10.1002/jmv.26699
10.1038/nature21056
10.1002/jmri.26534
10.1016/j.bspc.2019.101678
10.1016/j.compmedimag.2019.101673
10.1016/j.eswa.2018.04.021
10.1109/ACCESS.2019.2900127
10.1148/radiol.2019182716
10.1016/j.media.2018.03.006
10.3390/app9194130
10.1080/07391102.2020.1767212
10.1007/s40846-020-00529-4
10.1007/s13246-020-00865-4
10.1109/TMI.2020.2993291
10.3390/s18030699
10.1109/ACCESS.2019.2917266
10.1109/ACCESS.2019.2922691
10.1109/JBHI.2017.2775662
10.1016/j.scs.2020.102589
10.1016/j.eswa.2020.114054
10.1007/s10489-020-01829-7
10.3390/electronics9091388
10.1016/j.mehy.2020.109761
10.1007/s10489-020-01900-3
10.1038/s41598-020-76550-z
10.1016/j.compbiomed.2020.103792
10.1023/A:1011139631724
10.1038/ncomms5006
10.1158/1078-0432.CCR-14-0990
10.1007/s10115-006-0013-y
10.1016/j.patcog.2006.12.019
10.1016/j.jneumeth.2015.09.019
10.1016/j.patcog.2006.06.008
10.4310/SII.2009.v2.n3.a8
10.1016/j.neuroimage.2012.09.065
10.1016/j.engappai.2015.04.003
10.7717/peerj-cs.551
10.1016/j.isatra.2022.02.033
10.1117/1.JMI.4.4.041305
|
Classification of COVID-19 from tuberculosis and pneumonia using deep learning techniques.
|
Deep learning provides the healthcare industry with the ability to analyse data at exceptional speeds without compromising on accuracy. These techniques are applicable to the healthcare domain for accurate and timely prediction. Convolutional neural networks are a class of deep learning methods which have become dominant in various computer vision tasks and are attracting interest across a variety of domains, including radiology. Lung diseases such as tuberculosis (TB), bacterial and viral pneumonias, and COVID-19 are not predicted accurately due to the availability of very few samples for some of these diseases. These diseases can be easily diagnosed using X-ray or CT scan images, but the number of images available for each disease is unequal, resulting in an imbalanced input dataset. Conventional supervised machine learning methods do not achieve high accuracy when trained on a small number of COVID-19 data samples. Image data augmentation is a technique that can be used to artificially expand the size of a training dataset by creating modified versions of images in the dataset. Data augmentation helps reduce overfitting when training a deep neural network. The SMOTE (Synthetic Minority Oversampling Technique) algorithm is used for the purpose of balancing the classes. The novelty in this research work is to apply combined data augmentation and class balancing techniques before classification of tuberculosis, pneumonia, and COVID-19. The classification accuracy obtained with the proposed multi-level classification after training the model is 97.4% for TB and pneumonia and 88% for bacterial, viral, and COVID-19 classifications. The proposed multi-level classification method produced a ~8 to ~10% improvement in classification accuracy compared with existing methods in this area of research.
The results reveal that the proposed system is scalable to growing medical data and classifies lung diseases and their sub-types in less time with higher accuracy.
|
Medical & biological engineering & computing
| 2022-07-15T00:00:00
|
[
"LokeswariVenkataramana",
"D Venkata VaraPrasad",
"SSaraswathi",
"C MMithumary",
"RKarthikeyan",
"NMonika"
] |
10.1007/s11517-022-02632-x
10.1080/01431169508954507
10.1007/s42979-021-00695-5
10.3390/app10093233
10.1016/j.cell.2018.02.010
10.3390/app8101715
10.1111/j.1440-1843.2006.00947.x
10.1007/s40747-020-00199-4
10.1016/j.bbe.2020.08.008
10.1613/jair.953
10.1504/IJKESDP.2011.039875
10.1613/jair.1.11192
10.1016/j.eswa.2021.114986
10.1016/j.knosys.2021.107269
10.1016/j.ins.2021.03.041
10.1109/TMI.2020.2993291
10.1002/ima.22613
10.1016/j.compbiomed.2021.105134
10.3390/app10020559
10.1016/j.patrec.2021.08.018
|
RLMD-PA: A Reinforcement Learning-Based Myocarditis Diagnosis Combined with a Population-Based Algorithm for Pretraining Weights.
|
Myocarditis is heart muscle inflammation that is becoming more prevalent these days, especially with the prevalence of COVID-19. Noninvasive cardiac magnetic resonance (CMR) imaging can be used to diagnose myocarditis, but the interpretation is time-consuming and requires expert physicians. Computer-aided diagnostic systems can facilitate the automatic screening of CMR images for triage. This paper presents an automatic model for myocarditis classification based on a deep reinforcement learning approach, called reinforcement learning-based myocarditis diagnosis combined with a population-based algorithm (RLMD-PA), that we evaluated using the Z-Alizadeh Sani myocarditis dataset of CMR images prospectively acquired at Omid Hospital, Tehran. This model addresses the imbalanced classification problem inherent to the CMR dataset and formulates the classification problem as a sequential decision-making process. The policy architecture is based on a convolutional neural network (CNN). To implement this model, we first apply the artificial bee colony (ABC) algorithm to obtain initial values for the RLMD-PA weights. Next, the agent receives a sample at each step and classifies it. For each classification act, the agent gets a reward from the environment in which the reward of the minority class is greater than the reward of the majority class. Eventually, the agent finds an optimal policy under the guidance of a particular reward function and a helpful learning environment. Experimental results based on standard performance metrics show that RLMD-PA has achieved high accuracy for myocarditis classification, indicating that the proposed model is suitable for myocarditis diagnosis.
|
Contrast media & molecular imaging
| 2022-07-15T00:00:00
|
[
"Seyed VahidMoravvej",
"RoohallahAlizadehsani",
"SadiaKhanam",
"ZahraSobhaninia",
"AfshinShoeibi",
"FahimeKhozeimeh",
"Zahra AlizadehSani",
"Ru-SanTan",
"AbbasKhosravi",
"SaeidNahavandi",
"Nahrizul AdibKadri",
"Muhammad MokhzainiAzizan",
"NArunkumar",
"U RajendraAcharya"
] |
10.1155/2022/8733632
10.1056/nejmra0800028
10.1016/j.humpath.2005.07.009
10.1007/978-3-030-92238-2_57
10.1007/s00500-014-1334-5
10.1007/s10479-011-0894-3
10.1016/j.knosys.2017.11.029
10.1007/11538059_91
10.1145/1007730.1007735
10.1007/s10489-020-01637-z
10.1109/tkde.2005.95
10.1016/j.knosys.2015.10.012
10.1109/3477.764879
10.1016/j.asoc.2007.05.007
10.1609/aaai.v33i01.33013959
10.1016/j.jcmg.2009.09.023
10.12688/f1000research.14857.1
10.1186/s12968-019-0550-7
10.1161/circimaging.118.007598
10.1007/bf03086308
10.1016/j.acra.2013.01.004
10.1186/s12968-017-0419-6
10.1007/bf00994018
10.1001/jama.2016.7653
10.1007/978-1-4419-9326-7_5
10.1109/72.286925
10.1016/j.advengsoft.2013.12.007
10.1007/978-3-642-12538-6_6
10.1016/j.advengsoft.2016.01.008
|
Non-iterative learning machine for identifying CoViD19 using chest X-ray images.
|
CoViD19 is a novel disease which has created panic worldwide by infecting millions of people around the world. The last significant variant of this virus, called omicron, contributed to the majority of cases in the third wave across the globe. Though less severe than its predecessor, the delta variant, this mutation has shown a higher transmission rate. This novel virus with symptoms of pneumonia is dangerous as it is communicable and hence has engulfed the entire world in a very short span of time. With the help of machine learning techniques, the entire process of detection can be automated so that direct contact can be avoided. Therefore, in this paper, experimentation is performed on CoViD19 chest X-ray images using higher order statistics with iterative and non-iterative models. Higher order statistics provide a way of analyzing the disturbances in the chest X-ray images. The results obtained are quite good, with 96.64% accuracy using a non-iterative model. For fast testing of patients, the non-iterative model is preferred because it has an advantage over iterative models in terms of speed. Comparison with some of the available state-of-the-art methods and some iterative methods proves the efficacy of the work.
|
Scientific reports
| 2022-07-14T00:00:00
|
[
"SahilDalal",
"Virendra PVishwakarma",
"VarshaSisaudia",
"ParulNarwal"
] |
10.1038/s41598-022-15268-6
10.3346/jkms.2020.35.e150
10.1001/jama.2021.2927
10.1016/j.clinimag.2020.04.010
10.1016/j.jcv.2020.104359
10.1016/j.jcv.2020.104356
10.5582/bst.2020.01047
10.1016/j.ajem.2020.09.032
10.1007/s11046-021-00528-2
10.1007/s13246-020-00865-4
10.1109/JBHI.2020.3037127
10.1007/s10096-020-03901-z
10.1109/TIP.2021.3058783
10.1007/s10489-020-01826-w
10.1007/s12559-020-09787-5
10.1016/j.media.2020.101824
10.1016/j.chaos.2020.110495
10.1007/s10140-020-01886-y
10.1016/j.bspc.2021.102588
10.1016/j.compeleceng.2020.106960
10.1016/j.sysarc.2020.101830
10.1109/ACCESS.2020.3016780
10.1016/j.measurement.2020.108288
10.1016/j.asoc.2020.106912
10.1080/07391102.2020.1788642
10.1016/j.media.2020.101794
10.1007/s00521-020-05437-x
10.1109/5254.708428
10.1016/j.ejor.2017.08.040
10.1049/el.2017.0023
10.1016/j.neucom.2005.12.126
10.1016/j.neucom.2007.02.009
10.1016/j.neucom.2007.10.008
10.1109/TSMCB.2011.2168604
10.1007/s13369-020-04566-8
10.4108/eai.13-7-2018.163575
10.1007/s11042-019-08537-6
10.17148/IARJSET.2016.3119
10.1038/s41598-020-79139-8
10.1016/j.eng.2020.04.010
10.1038/s41598-019-56847-4
|
Detection of COVID-19 using deep learning techniques and classification methods.
|
Since patients are not quarantined while awaiting the results of the Polymerase Chain Reaction (PCR) test used in the diagnosis of COVID-19, the disease continues to spread. In this study, we aimed to reduce the duration and amount of transmission of the disease by shortening the diagnosis time of COVID-19 patients with the use of Computed Tomography (CT). In addition, we aimed to provide a decision support system to radiologists in the diagnosis of COVID-19. Deep features were extracted with deep learning models such as ResNet-50, ResNet-101, AlexNet, Vgg-16, Vgg-19, GoogLeNet, SqueezeNet, and Xception on 1345 CT images obtained from the radiography database of Siirt Education and Research Hospital. These deep features are given to classification methods such as Support Vector Machine (SVM), k Nearest Neighbor (kNN), Random Forest (RF), Decision Trees (DT), and Naive Bayes (NB), and their performance is evaluated with test images. Accuracy, F1-score, and the ROC curve were considered as success criteria. According to the results obtained, the best performance was achieved with the ResNet-50 and SVM method. The accuracy was 96.296%, the F1-score was 95.868%, and the AUC value was 0.9821. The deep learning model and classification method examined in this study, found to be high performing, can be used as an auxiliary decision support system by preventing unnecessary tests for COVID-19 disease.
|
Information processing & management
| 2022-07-14T00:00:00
|
[
"ÇinareOğuz",
"MeteYağanoğlu"
] |
10.1016/j.ipm.2022.103025
10.1101/2020.03.12.20027185
|
Computational Intelligence-Based Method for Automated Identification of COVID-19 and Pneumonia by Utilizing CXR Scans.
|
Chest X-ray (CXR) scans are emerging as an important diagnostic tool for the early spotting of COVID and other significant lung diseases. The recognition of visual symptoms is difficult and can take radiologists longer, as CXR provides various signs of viral infection. Therefore, artificial intelligence-based methods for automated identification of COVID utilizing X-ray images have been found to be very promising. In the era of deep learning, effective utilization of existing pretrained generalized models plays a decisive role in terms of time and accuracy. In this paper, the weights of the existing pretrained models VGG16 and InceptionV3 have been leveraged. A base model has been created using the pretrained models (VGG16 and InceptionV3). The last fully connected (FC) layer has been added as per the number of classes for classification of CXR in binary and multi-class classification by appropriately using transfer learning. Finally, a combination of layers is made by integrating the FC layer weights of both models (VGG16 and InceptionV3). The image dataset used for experimentation consists of healthy, COVID, pneumonia viral, and pneumonia bacterial classes. The proposed weight fusion method has outperformed the existing models in terms of accuracy, achieving 99.5% accuracy in binary classification over 20 epochs and 98.2% accuracy in three-class classification over 100 epochs.
|
Computational intelligence and neuroscience
| 2022-07-09T00:00:00
|
[
"BhavanaKaushik",
"DeepikaKoundal",
"NeelamGoel",
"AtefZaguia",
"AssayeBelay",
"HamzaTurabieh"
] |
10.1155/2022/7124199
10.1016/j.ijid.2020.01.009
10.4018/ijehmc.20220701.oa4
10.1111/exsy.12749
10.1109/jiot.2018.2802898
10.1109/tkde.2009.191
10.1186/s40537-016-0043-6
10.1109/access.2018.2845399
10.1016/j.eng.2020.04.010
10.1080/07391102.2020.1767212
10.1016/j.compbiomed.2020.103792
10.1109/access.2020.2994762
10.1016/j.mehy.2020.109761
10.1109/tmi.2020.2993291
10.1007/s40846-020-00529-4
10.3390/app10020559
10.1016/j.measurement.2019.05.076
10.1155/2020/8828855
10.1016/j.cmpb.2020.105532
10.1016/j.imu.2020.100360
10.1016/j.imu.2020.100412
10.3233/xst-200720
10.1016/j.asoc.2020.106580
10.1016/j.chaos.2020.110071
10.3390/life11111118
10.1007/s10489-020-01826-w
10.1109/access.2021.3095312
10.1109/ssci50451.2021.9659919
10.1155/2021/6919483
10.1155/2021/8828404
10.1109/CVPR.2016.308
10.1016/j.jpha.2020.03.001
10.3238/arztebl.2014.0181
10.1016/j.chaos.2020.109947
10.1038/s41598-020-74539-2
10.1155/2020/8889023
10.1109/iceiec49280.2020.9152329
10.1016/j.compbiomed.2020.103869
10.1007/978-3-030-01424-7_27
10.3906/elk-2105-243
10.1016/j.compeleceng.2022.108028
10.1155/2022/8549707
10.3390/s22062278
|
PCA-Based Incremental Extreme Learning Machine (PCA-IELM) for COVID-19 Patient Diagnosis Using Chest X-Ray Images.
|
Novel coronavirus 2019 has created a pandemic and was first reported in December 2019. It has had very adverse consequences on people's daily life, healthcare, and the world's economy. According to the World Health Organization's most recent statistics, COVID-19 has become a worldwide pandemic, with the number of infected persons and fatalities growing at an alarming rate. An effective system for the early detection of COVID-19 patients is highly required to curb further spreading of the virus from affected persons. Therefore, to identify positive cases early and to support radiologists in the automatic diagnosis of COVID-19 from X-ray images, a novel method, PCA-IELM, is proposed based on principal component analysis (PCA) and the incremental extreme learning machine. The suggested method's key contribution is that it combines the benefits of PCA and the incremental extreme learning machine. Further, our strategy PCA-IELM reduces the input dimension by extracting the most important information from an image. Consequently, the technique can effectively increase COVID-19 patient prediction performance. In addition, PCA-IELM has a faster training speed than a multi-layer neural network. The proposed approach was tested on a COVID-19 patient chest X-ray image dataset. The experimental results indicate that the proposed approach PCA-IELM outperforms PCA-SVM and PCA-ELM in terms of accuracy (98.11%), precision (96.11%), recall (97.50%), F1-score (98.50%), etc., and training speed.
|
Computational intelligence and neuroscience
| 2022-07-09T00:00:00
|
[
"VinodKumar",
"SougatamoyBiswas",
"Dharmendra SinghRajput",
"HarshitaPatel",
"BasantTiwari"
] |
10.1155/2022/9107430
10.1016/j.neucom.2012.02.042
10.1007/s00521-014-1567-3
10.1016/j.jvcir.2019.05.016
10.1007/s00521-020-05204-y
10.1007/s10586-021-03282-8
10.1007/s11063-012-9253-x
10.3390/sym11010001
10.1007/s12559-014-9259-y
10.1016/j.neucom.2007.02.009
10.1109/UT.2017.7890275
10.1109/tgrs.2017.2743102
10.1016/j.asoc.2021.107482
10.1016/j.cageo.2021.104877
10.1007/s13042-015-0419-5
10.1016/j.neucom.2005.12.126
10.1007/s13042-020-01232-1
10.1016/j.neucom.2007.07.025
10.1016/j.eswa.2016.08.052
10.3390/en12071223
10.1504/ijmso.2018.096451
10.1007/s11042-018-7093-z
10.1080/07391102.2022.2034668
10.1016/B978-0-12-816718-2.00016-6
10.1080/10798587.2017.1316071
10.1016/j.scs.2022.103713
10.1155/2016/7293278
10.1016/j.engappai.2018.07.002
10.1007/s13042-019-01001-9
10.1007/s11694-008-9043-3
10.1016/j.neucom.2020.04.098
10.1080/21642583.2020.1759156
10.32604/cmc.2021.016957
10.1007/s11042-019-07978-3
10.1016/j.compbiomed.2020.103792
10.1007/s13246-020-00865-4
10.3389/fmed.2020.608525
10.33889/ijmems.2020.5.4.052
10.48550/arXiv.2003.11055
10.48550/arXiv.2110.04160
10.1101/2020.02.23.20026930
10.1007/s00330-021-07715-1
10.1016/j.scs.2020.102589
10.1016/j.eng.2020.04.010
|
CAD systems for COVID-19 diagnosis and disease stage classification by segmentation of infected regions from CT images.
|
Here we propose a computer-aided diagnosis (CAD) system to differentiate COVID-19 (the coronavirus disease of 2019) patients from normal cases, as well as to perform infection region segmentation along with infection severity estimation using computed tomography (CT) images. The developed system facilitates timely administration of appropriate treatment by identifying the disease stage without reliance on medical professionals. To date, this model provides the most accurate, fully automatic, real-time COVID-19 CAD framework.
The CT image dataset of COVID-19 and non-COVID-19 individuals was subjected to conventional ML stages to perform binary classification. In the feature extraction stage, SIFT, SURF, and ORB image descriptors and the bag-of-features technique were implemented for the appropriate differentiation of chest CT regions affected with COVID-19 from normal cases. This is the first work introducing this concept for the COVID-19 diagnosis application. The preferred diverse database and selected features that are invariant to scale, rotation, distortion, noise, etc. make this framework applicable in real time. Also, this fully automatic approach, which is faster than existing models, can readily be incorporated into CAD systems. The severity score was measured based on the infected regions along the lung field. Infected regions were segmented through a three-class semantic segmentation of the lung CT image. Using the severity score, the disease stages were classified as mild if the lesion area covers less than 25% of the lung area; moderate if 25-50%; and severe if greater than 50%. Our proposed model resulted in a classification accuracy of 99.7% with a PNN classifier, along with an area under the curve (AUC) of 0.9988, 99.6% sensitivity, 99.9% specificity, and a misclassification rate of 0.0027. The developed infected region segmentation model gave 99.47% global accuracy, 94.04% mean accuracy, 0.8968 mean IoU (intersection over union), 0.9899 weighted IoU, and a mean Boundary F1 (BF) contour matching score of 0.9453, using DeepLabv3+ with its weights initialized using ResNet-50.
The developed CAD system is able to perform fully automatic and accurate diagnosis of COVID-19, along with infected region extraction and disease stage identification. The ORB image descriptor with the bag-of-features technique and a PNN classifier achieved the best classification performance.
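The staging rule described in the abstract (mild below 25% lesion coverage, moderate at 25-50%, severe above 50% of the lung area) can be sketched as a small helper. This is a hypothetical illustration, not the authors' code; the function name and the handling of the exact 25%/50% boundaries are assumptions.

```python
def severity_stage(lesion_area: float, lung_area: float) -> str:
    """Stage infection severity from the fraction of lung area covered
    by segmented lesions (mild < 25%, moderate 25-50%, severe > 50%)."""
    if lung_area <= 0:
        raise ValueError("lung_area must be positive")
    ratio = lesion_area / lung_area
    if ratio < 0.25:
        return "mild"
    if ratio <= 0.50:
        return "moderate"
    return "severe"

# e.g. a lesion covering 30% of the lung field is graded moderate
stage = severity_stage(lesion_area=30.0, lung_area=100.0)
```

In practice, the two areas would be pixel counts taken from the three-class semantic segmentation mask.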
|
BMC bioinformatics
| 2022-07-07T00:00:00
|
[
"Mohammad HAlshayeji",
"SilpaChandraBhasi Sindhu",
"Sa'edAbed"
] |
10.1186/s12859-022-04818-4
10.1186/s12859-021-04083-x
10.1016/j.patrec.2020.10.001
10.1007/978-3-030-01234-2_49
10.1007/s11042-022-12608-6
10.1016/j.imu.2020.100427
10.1007/s10916-020-01562-1
10.1038/s41598-020-79139-8
10.1038/s41598-019-56847-4
10.1016/j.bbe.2021.05.013
10.1371/journal.pone.0235187
10.1038/s41591-020-0931-3
10.3390/diagnostics10110901
10.1109/TMI.2020.2996645
10.1016/j.eswa.2021.114848
10.1371/journal.pone.0252384.t001
10.1371/journal.pone.0236618
10.3389/fmed.2020.557453
10.1007/s00330-020-07033-y
10.1016/j.cell.2020.04.045
10.1007/s12530-021-09386-1
10.1007/s00371-020-01814-8
10.1017/S1431927621001653
10.1007/978-981-15-5697-5_11
10.1007/s11760-020-01759-4
10.3390/bdcc5040053
10.1007/s10489-021-02731-6
10.1007/s00330-021-08049-8
10.3389/fmed.2020.608525
10.1117/1.JMI.8.S1.017502
|
FWLICM-Deep Learning: Fuzzy Weighted Local Information C-Means Clustering-Based Lung Lobe Segmentation with Deep Learning for COVID-19 Detection.
|
Coronavirus (COVID-19) causes an extensive range of respiratory infections; it is a kind of ribonucleic acid (RNA) virus that affects both animals and humans. Moreover, COVID-19 is a new disease, which produces infection of the upper respiratory tract and lungs. The novel coronavirus is a rapidly spreading global pathogen that threatens the lives of billions of humans. It is therefore important to identify positive cases in order to prevent the spread of the disease and to speedily treat infected patients. Hence, in this paper, the WSCA-based RMDL approach is devised for COVID-19 prediction by means of chest X-ray images. Moreover, a Fuzzy Weighted Local Information C-Means (FWLICM) approach is devised in order to segment lung lobes. The developed FWLICM method is designed by modifying the Fuzzy Local Information C-Means (FLICM) technique. Additionally, a random multimodel deep learning (RMDL) classifier is utilized for the COVID-19 prediction process. A new optimization approach, named the water sine cosine algorithm (WSCA), is devised in order to obtain an effective prediction. The developed WSCA is newly designed by incorporating the sine cosine algorithm (SCA) and the water cycle algorithm (WCA). The developed WSCA-driven RMDL approach outperforms other COVID-19 prediction techniques, with accuracy, specificity, sensitivity, and Dice score of 92.41%, 93.55%, 92.14%, and 90.02%, respectively.
|
Journal of digital imaging
| 2022-07-06T00:00:00
|
[
"RRajeswari",
"VeerrajuGampala",
"BalajeeMaram",
"RCristin"
] |
10.1007/s10278-022-00667-y
10.1016/j.compbiomed.2020.103805
10.1016/j.cmpb.2020.105581
10.1016/j.compbiomed.2020.103792
10.1056/NEJMc2001468
10.1007/s12098-020-03263-6
10.1016/S0140-6736(20)30522-5
10.1038/s41368-020-0075-9
10.1148/radiol.2020200490
10.32098/mltj.01.2016.06
10.1053/j.jfas.2020.11.003
10.1016/j.media.2017.07.005
10.1109/ACCESS.2017.2788044
10.1016/j.cmpb.2018.04.005
10.1007/s10462-018-9641-3
10.1016/j.measurement.2019.05.076
10.46253/j.mr.v3i2.a4
10.1080/14737159.2020.1757437
10.1109/TIP.2010.2040763
10.1016/j.compstruc.2012.07.010
10.1016/j.knosys.2015.12.022
10.1016/j.eswa.2020.114054
10.1007/s13246-020-00952-6
|
Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation.
|
Although deep learning-based computer-aided diagnosis systems have recently achieved expert-level performance, developing a robust model requires large, high-quality data with annotations that are expensive to obtain. This poses a conundrum: annually collected chest X-rays cannot be utilized due to the absence of labels, especially in deprived areas. In this study, we present a framework named distillation for self-supervision and self-train learning (DISTL), inspired by the learning process of radiologists, which can improve the performance of a vision transformer simultaneously through self-supervision and self-training via knowledge distillation. In external validation from three hospitals for the diagnosis of tuberculosis, pneumothorax, and COVID-19, DISTL offers gradually improved performance as the amount of unlabeled data increases, even surpassing the fully supervised model trained with the same amount of labeled data. We additionally show that the model obtained with DISTL is robust to various real-world nuisances, offering better applicability in clinical settings.
|
Nature communications
| 2022-07-06T00:00:00
|
[
"SangjoonPark",
"GwanghyunKim",
"YujinOh",
"Joon BeomSeo",
"Sang MinLee",
"Jin HwanKim",
"SungjunMoon",
"Jae-KwangLim",
"Chang MinPark",
"Jong ChulYe"
] |
10.1038/s41467-022-31514-x
10.1001/jama.2016.17216
10.1038/s41591-018-0107-6
10.1038/s41591-018-0029-3
10.1016/j.jacr.2017.12.028
10.1186/s41747-018-0061-6
10.1148/radiol.2017162326
10.1038/s41598-019-42557-4
10.1371/journal.pone.0221339
10.1016/S2589-7500(21)00116-3
10.2174/1573405617666210127154257
10.1016/j.media.2019.101539
10.1109/TMI.2020.2995518
10.1016/j.media.2020.101797
|
Development and validation of bone-suppressed deep learning classification of COVID-19 presentation in chest radiographs.
|
Coronavirus disease 2019 (COVID-19) is a pandemic disease. Fast and accurate diagnosis of COVID-19 from chest radiography may enable more efficient allocation of scarce medical resources and hence improved patient outcomes. Deep learning classification of chest radiographs may be a plausible step towards this. We hypothesize that bone suppression of chest radiographs may improve the performance of deep learning classification of COVID-19 phenomena in chest radiographs.
Two bone suppression methods (Gusarev
Bone suppression of external test data was found to significantly (P<0.05) improve AUC for one classifier architecture [from 0.698 (non-suppressed) to 0.732 (Rajaraman-suppressed)]. For the other classifier architecture, suppression did not significantly (P>0.05) improve or worsen classifier performance.
Rajaraman suppression significantly improved classification performance in one classification architecture, and did not significantly worsen classifier performance in the other classifier architecture. This research could be extended to explore the impact of bone suppression on classification of different lung pathologies, and the effect of other image enhancement techniques on classifier performance.
|
Quantitative imaging in medicine and surgery
| 2022-07-06T00:00:00
|
[
"Ngo Fung DanielLam",
"HongfeiSun",
"LimingSong",
"DongrongYang",
"ShaohuaZhi",
"GeRen",
"Pak HeiChou",
"Shiu Bun NelsonWan",
"Man Fung EstherWong",
"King KwongChan",
"Hoi Ching HaileyTsang",
"Feng-Ming SpringKong",
"Yì Xiáng JWáng",
"JingQin",
"Lawrence Wing ChiChan",
"MichaelYing",
"JingCai"
] |
10.21037/qims-21-791
10.1016/S0140-6736(20)30183-5
10.1016/S2213-2600(20)30076-X
10.2807/1560-7917.ES.2021.26.24.2100509
10.1016/S0140-6736(21)01358-1
10.1056/NEJMoa2002032
10.1148/radiol.2020200642
10.1016/j.radi.2020.10.018
10.1007/s11263-015-0816-y
10.1155/2018/7068349
10.1038/s41598-020-76550-z
10.1007/s10489-020-01829-7
10.1097/RLI.0000000000000748
10.3348/kjr.2020.0536
10.1016/j.patrec.2021.06.021
10.3390/diagnostics11050840
10.21037/qims-20-1230
10.1002/mp.14371
10.1007/s11548-015-1278-y
10.1109/TMI.2006.871549
10.1016/j.ejrad.2009.03.046
10.1148/radiol.11100153
10.2214/ajr.174.1.1740071
10.7937/91ah-v663
10.1148/radiol.2021203957
10.1148/ryai.2019180041
10.1109/TIP.2003.819861
10.1016/j.compbiomed.2021.104319
10.1038/s42256-021-00307-0
10.1038/s41598-020-74539-2
10.2307/2531595
10.1371/journal.pone.0254809
10.15585/mmwr.mm7015e2
10.1038/s41598-021-83237-6
|
Multi-branch fusion auxiliary learning for the detection of pneumonia from chest X-ray images.
|
Lung infections caused by bacteria and viruses are infectious and require timely screening and isolation, and different types of pneumonia require different treatment plans. Therefore, finding a rapid and accurate screening method for lung infections is critical. To achieve this goal, we proposed a multi-branch fusion auxiliary learning (MBFAL) method for pneumonia detection from chest X-ray (CXR) images. The MBFAL method was used to perform two tasks through a double-branch network. The first task was to recognize the absence of pneumonia (normal), COVID-19, other viral pneumonia, and bacterial pneumonia from CXR images, and the second task was to recognize the three types of pneumonia from CXR images. The latter task was used to assist the learning of the former task to achieve a better recognition effect. In the process of auxiliary parameter updating, the feature maps of different branches were fused after sample screening through label information to enhance the model's ability to recognize cases of pneumonia without impacting its ability to recognize normal cases. Experiments show that an average classification accuracy of 95.61% is achieved using MBFAL. The single-class accuracy for normal, COVID-19, other viral pneumonia, and bacterial pneumonia was 98.70%, 99.10%, 96.60%, and 96.80%, respectively, and the recall was 97.20%, 98.60%, 96.10%, and 89.20%, respectively, using the MBFAL method. Compared with the baseline model and the models constructed using the above methods separately, better results for the rapid screening of pneumonia were achieved using MBFAL.
|
Computers in biology and medicine
| 2022-07-03T00:00:00
|
[
"JiaLiu",
"JingQi",
"WeiChen",
"YongjianNian"
] |
10.1016/j.compbiomed.2022.105732
10.1109/TMI.2020.3040950
10.1128/jcm.02589-20
10.1186/s13054-015-1083-6
10.1002/jmv.25674
10.2807/1560-7917.es.2020.25.3.2000045
10.1148/radiol.2020200432
10.1148/radiol.2020200241
10.1016/j.chest.2020.04.003
10.1148/radiol.2020200343
10.1016/j.clinimag.2020.11.004
10.1148/rg.2018170048
10.1148/ryct.2020200034
10.1109/ICESC51422.2021.9532943
10.1016/j.compmedimag.2016.11.004
10.1016/j.crad.2018.12.015
10.1109/TMI.2020.2994908
10.1109/TPAMI.2021.3054719
10.1109/CVPR.2019.00197
10.1109/CVPR.2019.00332
10.1023/A:1007379606734
10.1109/ICCV.2019.00649
10.1007/s13246-020-00865-4
10.1038/s41598-020-76550-z
10.1007/s00500-020-05424-3
10.1109/ICCC51575.2020.9345005
10.1016/j.asoc.2020.106744
10.1016/j.ins.2020.09.041
10.1016/j.neucom.2022.01.055
10.1109/ICCV.2017.89
10.1109/CVPR.2018.00813
10.1109/TNNLS.2021.3114747
10.1109/TMI.2019.2893944
10.1109/ICCV.2017.324
10.1109/ACCESS.2020.3010287
10.1016/j.compbiomed.2021.104319
10.1109/WACV.2018.00097
10.1109/TCBB.2021.3065361
10.1007/978-3-319-23344-4_37
10.1007/978-3-319-23117-4_43
|
Multi-center validation of an artificial intelligence system for detection of COVID-19 on chest radiographs in symptomatic patients.
|
While chest radiograph (CXR) is the first-line imaging investigation in patients with respiratory symptoms, differentiating COVID-19 from other respiratory infections on CXR remains challenging. We developed and validated an AI system for COVID-19 detection on presenting CXR.
A deep learning model (RadGenX), trained on 168,850 CXRs, was validated on a large international test set of presenting CXRs of symptomatic patients from 9 study sites (US, Italy, and Hong Kong SAR) and 2 public datasets from the US and Europe. Performance was measured by area under the receiver operating characteristic curve (AUC). Bootstrapped simulations were performed to assess performance across a range of potential COVID-19 disease prevalence values (3.33 to 33.3%). Comparison against international radiologists was performed on an independent test set of 852 cases.
RadGenX achieved an AUC of 0.89 on 4-fold cross-validation and an AUC of 0.79 (95%CI 0.78-0.80) on an independent test cohort of 5,894 patients. DeLong's test showed statistical differences in model performance across patients from different regions (p < 0.01), disease severity (p < 0.001), gender (p < 0.001), and age (p = 0.03). Prevalence simulations showed the negative predictive value increases from 86.1% at 33.3% prevalence to greater than 98.5% at any prevalence below 4.5%. Compared with radiologists, McNemar's test showed the model has higher sensitivity (p < 0.001) but lower specificity (p < 0.001).
An AI model that predicts COVID-19 infection on CXR in symptomatic patients was validated on a large international cohort providing valuable context on testing and performance expectations for AI systems that perform COVID-19 prediction on CXR.
• An AI model developed using CXRs to detect COVID-19 was validated in a large multi-center cohort of 5,894 patients from 9 prospectively recruited sites and 2 public datasets. • Differences in AI model performance were seen across region, disease severity, gender, and age. • Prevalence simulations on the international test set demonstrate the model's NPV is greater than 98.5% at any prevalence below 4.5%.
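The prevalence simulations rest on the standard Bayesian relation between negative predictive value, sensitivity, specificity, and prevalence. A minimal sketch follows; the sensitivity and specificity values used below are illustrative placeholders, not the paper's model characteristics.

```python
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value by Bayes' rule:
    NPV = spec*(1-p) / (spec*(1-p) + (1-sens)*p)."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# NPV climbs as prevalence falls, mirroring the trend reported in the
# abstract (sensitivity/specificity here are assumed, not the model's).
npv_low_prev = npv(sensitivity=0.85, specificity=0.80, prevalence=0.045)
npv_high_prev = npv(sensitivity=0.85, specificity=0.80, prevalence=0.333)
```

Sweeping `prevalence` over the paper's 3.33-33.3% range reproduces the qualitative behavior: NPV is a monotonically decreasing function of prevalence for fixed test characteristics.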
|
European radiology
| 2022-07-03T00:00:00
|
[
"Michael DKuo",
"Keith W HChiu",
"David SWang",
"Anna RitaLarici",
"DmytroPoplavskiy",
"AdeleValentini",
"AlessandroNapoli",
"AndreaBorghesi",
"GuidoLigabue",
"Xin Hao BFang",
"Hing Ki CWong",
"SailongZhang",
"John RHunter",
"AbeerMousa",
"AmatoInfante",
"LorenzoElia",
"SalvatoreGolemi",
"Leung Ho PYu",
"Christopher K MHui",
"Bradley JErickson"
] |
10.1007/s00330-022-08969-z
10.1016/S1473-3099(20)30457-6
10.1001/jamanetworkopen.2020.37067
10.1016/S2468-2667(20)30308-X
10.1056/NEJMp2025631
10.1148/radiol.2020200432
10.1148/radiol.2020201365
10.1038/s41598-020-76550-z
10.1148/radiol.2020203511
10.1109/TMI.2020.2993291
10.1016/j.patrec.2020.09.010
10.1155/2020/8889023
10.1109/JBHI.2020.3037127
10.1148/ryai.2020200029
10.1016/j.media.2021.102225
10.1148/radiol.2020200038
10.1038/s41597-020-00741-6
10.1038/s42256-021-00307-0
10.1371/journal.pmed.1002683
10.1038/s41467-020-19802-w
10.1016/S2213-2600(21)00005-9
10.1016/S2666-5247(20)30200-7
10.1126/sciadv.abd5393
|
Explainable deep learning algorithm for distinguishing incomplete Kawasaki disease by coronary artery lesions on echocardiographic imaging.
|
Incomplete Kawasaki disease (KD) has often been misdiagnosed due to a lack of the clinical manifestations of classic KD. However, it is associated with a markedly higher prevalence of coronary artery lesions. Identifying coronary artery lesions by echocardiography is important for the timely diagnosis of and favorable outcomes in KD. Moreover, similar to KD, coronavirus disease 2019, currently causing a worldwide pandemic, also manifests with fever; therefore, it is crucial at this moment that KD should be distinguished clearly among the febrile diseases in children. In this study, we aimed to validate a deep learning algorithm for classification of KD and other acute febrile diseases.
We obtained coronary artery images by echocardiography of children (n = 138 for KD; n = 65 for pneumonia). We trained six deep learning networks (VGG19, Xception, ResNet50, ResNext50, SE-ResNet50, and SE-ResNext50) using the collected data.
SE-ResNext50 showed the best performance in terms of accuracy, specificity, and precision in the classification. SE-ResNext50 offered a precision of 81.12%, a sensitivity of 84.06%, and a specificity of 58.46%.
The results of our study suggested that deep learning algorithms have similar performance to an experienced cardiologist in detecting coronary artery lesions to facilitate the diagnosis of KD.
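The reported precision, sensitivity, and specificity all derive from the binary confusion matrix of the classifier. A small hedged sketch (function name and toy counts are illustrative, not taken from the study):

```python
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Derive the reported figures of merit from binary confusion counts:
    precision = TP/(TP+FP), sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return {
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy counts chosen so every metric works out to 0.8.
m = metrics(tp=8, fp=2, tn=8, fn=2)
```

Note that a model can trade specificity for sensitivity, which is why SE-ResNext50's 84.06% sensitivity coexists with a 58.46% specificity.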
|
Computer methods and programs in biomedicine
| 2022-07-01T00:00:00
|
[
"HaeyunLee",
"YongsoonEun",
"Jae YounHwang",
"Lucy YoungminEun"
] |
10.1016/j.cmpb.2022.106970
|
CVD-HNet: Classifying Pneumonia and COVID-19 in Chest X-ray Images Using Deep Network.
|
The use of computer-assisted analysis to improve image interpretation has been a long-standing challenge in the medical imaging industry. In terms of image comprehension, continuous advances in artificial intelligence (AI), predominantly in deep learning (DL) techniques, are supporting the classification, detection, and quantification of anomalies in medical images. DL techniques are the most rapidly evolving branch of AI and have recently been successfully applied in a variety of fields, including medicine. This paper provides a classification method for COVID-19-infected X-ray images based on a novel deep CNN model. For COVID-19-specific pneumonia analysis, two new customized CNN architectures, CVD-HNet1 (COVID-HybridNetwork1) and CVD-HNet2 (COVID-HybridNetwork2), have been designed. The suggested method systematically utilizes boundary- and region-based operations as well as convolution processes. In comparison with existing CNNs, the suggested classification method achieves excellent accuracy (98%), F-score (0.99), and MCC (0.97). These results indicate impressive classification accuracy on a limited dataset; with more training examples, even better results can be achieved. Overall, our CVD-HNet model could be a useful tool for radiologists in diagnosing and detecting COVID-19 instances early.
|
Wireless personal communications
| 2022-06-28T00:00:00
|
[
"SSuganyadevi",
"VSeethalakshmi"
] |
10.1007/s11277-022-09864-y
10.1183/13993003.00775-2020
10.1038/s41598-020-76282-0
10.1109/ACCESS.2020.3005510
10.1007/s13246-020-00865-4
10.1007/s40846-020-00529-4
10.1016/j.cmpb.2020.105581
10.1109/TMI.2020.2993291
10.1016/j.cmpb.2020.105608
10.1016/j.pdpdt.2021.102473
10.1038/s42003-020-01535-7
10.1109/RBME.2020.2990959
10.1016/j.mri.2021.03.005
10.1016/j.clinimag.2020.09.002
10.1155/2021/6621607
10.1007/s10044-021-00970-4
10.2196/23693
10.1007/s12530-021-09385-2
10.1038/s42256-021-00307-0
10.1016/j.imu.2020.100427
10.1371/journal.pone.0250688
10.1007/s10140-020-01886-y
10.1088/2632-2153/abf22c
10.1371/journal.pone.0235187
10.1038/s41591-020-0931-3
10.1080/21681163.2015.1135299
10.1097/RLI.0000000000000341
10.1109/TMI.2015.2508280
10.1016/j.media.2016.05.004
10.4103/2153-3539.186902
10.1007/s11042-020-09981-5
10.1109/JBHI.2016.2631401
10.1007/978-3-319-0443-0_39
10.1007/s11277-021-09031-9
10.1109/TMI.2016.2521800
10.1109/TMI.2016.2548501
10.1038/srep38897
10.3389/fnins.2014.00229
10.1007/s13735-021-00218-1
10.1007/s10278-016-9914-9
10.1109/JBHI.2016.2636665
10.1118/1.4967345
10.1002/9781119792253.ch8
10.1080/03772063.2021.1893231
10.1007/s11517-016-1590-x
10.3389/fonc.2020.01621
|
Learning deep neural networks' architectures using differential evolution. Case study: Medical imaging processing.
|
The COVID-19 pandemic has changed the way we practice medicine. Cancer patient and obstetric care landscapes have been distorted. Delaying cancer diagnosis or maternal-fetal monitoring has increased the number of preventable deaths or pregnancy complications. One solution is using artificial intelligence to help medical personnel establish the diagnosis in a faster and more accurate manner. Deep learning is the state-of-the-art solution for image classification. Researchers manually design fixed deep learning neural network structures and afterwards verify their performance. The goal of this paper is to propose a method for learning deep network architectures automatically. As the number of network architectures increases exponentially with the number of convolutional layers in the network, we propose a differential evolution algorithm to traverse the search space. We first propose a way to encode the network structure as a candidate solution in a fixed-length integer array, followed by the initialization of the differential evolution method. A set of random individuals is generated, followed by mutation, recombination, and selection. At each generation, the individuals with the poorest loss values are eliminated and replaced with more competitive individuals. The model has been tested on three cancer datasets containing MRI scans and histopathological images and two maternal-fetal screening ultrasound image datasets. The novel proposed method has been compared and statistically benchmarked against four state-of-the-art deep learning networks: VGG16, ResNet50, Inception V3, and DenseNet169. The experimental results showed that the model is competitive with other state-of-the-art models, obtaining accuracies between 78.73% and 99.50% depending on the dataset it was applied to.
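The loop described above (fixed-length integer encoding, then mutation, recombination, and greedy selection each generation) can be sketched as a generic differential evolution routine. This is an assumed illustration: the toy fitness function stands in for a trained network's validation loss, and all names and hyperparameter values are hypothetical, not the paper's.

```python
import random

def evolve(fitness, length, low, high, pop_size=10, gens=20, F=0.8, CR=0.9, seed=0):
    """Differential evolution over fixed-length integer vectors (each vector
    encoding, e.g., per-layer hyperparameters). `fitness` is minimized;
    mutants are rounded and clipped back into [low, high]."""
    rng = random.Random(seed)
    pop = [[rng.randint(low, high) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        for i, target in enumerate(pop):
            # Mutation: combine three distinct individuals other than the target.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [min(high, max(low, round(a[k] + F * (b[k] - c[k]))))
                      for k in range(length)]
            # Recombination (per-gene crossover), then greedy selection.
            trial = [mutant[k] if rng.random() < CR else target[k]
                     for k in range(length)]
            if fitness(trial) <= fitness(target):
                pop[i] = trial
    return min(pop, key=fitness)

# Toy quadratic objective standing in for a network's validation loss.
best = evolve(lambda v: sum((x - 8) ** 2 for x in v), length=4, low=1, high=16)
```

In an architecture-search setting, evaluating `fitness` would mean training the decoded network briefly and returning its loss, which is why keeping `pop_size` and `gens` small matters.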
|
Computers in biology and medicine
| 2022-06-26T00:00:00
|
[
"SmarandaBelciug"
] |
10.1016/j.compbiomed.2022.105623
10.1159/000508254
10.3390/jcm9113749
10.1016/S2214-109X(21)00079-6
10.1111/jgh.15325
10.1136/bmjpo-2020-000859
10.5281/zenodo.3904280
10.1002/uog.20945
10.1002/uog.20796
10.1109/ISBI.2019.8759377
10.1103/PhysRevE.101.052604
10.1038/s41467-021-26568-2
10.1146/annurev-conmatphys-031119-050745
10.1038/s42256-018-0006-z
10.1109/TAI.2021.3067574
10.34740/Kaggle/dsv/1183165
10.1080/00949655.2010.520163
10.3390/s21062222
10.1101/2020.08.15.20175760
10.1038/s41598-020-67076-5
|
Eight pruning deep learning models for low storage and high-speed COVID-19 computed tomography lung segmentation and heatmap-based lesion localization: A multicenter study using COVLIAS 2.0.
|
COVLIAS 1.0, an automated lung segmentation system, was designed for COVID-19 diagnosis, but it has issues related to storage space and speed. This study shows that COVLIAS 2.0 uses pruned AI (PAI) networks to improve both storage and speed while maintaining high performance on lung segmentation and lesion localization.
Methodology: The proposed study uses ∼9,000 multicenter CT slices from two different nations, namely, CroMed from Croatia (80 patients, experimental data) and NovMed from Italy (72 patients, validation data). We hypothesize that by using pruning and evolutionary optimization algorithms, the size of the AI models can be reduced significantly while ensuring optimal performance. Eight pruned models were designed by combining four optimization techniques, (i) differential evolution (DE), (ii) genetic algorithm (GA), (iii) particle swarm optimization (PSO), and (iv) whale optimization (WO), with two deep learning frameworks, (i) fully connected network (FCN) and (ii) SegNet. COVLIAS 2.0 was validated using "Unseen NovMed" and benchmarked against MedSeg. Statistical tests for stability and reliability were also conducted.
The pruned models (i) FCN-DE, (ii) FCN-GA, (iii) FCN-PSO, and (iv) FCN-WO showed storage improvements of 92.4%, 95.3%, 98.7%, and 99.8%, respectively, compared against solo FCN, and (v) SegNet-DE, (vi) SegNet-GA, (vii) SegNet-PSO, and (viii) SegNet-WO showed improvements of 97.1%, 97.9%, 98.8%, and 99.2%, respectively, compared against solo SegNet. AUC was >0.94 (p < 0.0001) on the CroMed and >0.86 (p < 0.0001) on the NovMed data set for all eight EA models. PAI inference took <0.25 s per image. DenseNet-121-based Grad-CAM heatmaps confirmed localization of ground-glass opacity lesions.
The eight successfully validated PAI networks are five times faster and storage-efficient, and could be used in clinical settings.
|
Computers in biology and medicine
| 2022-06-26T00:00:00
|
[
"MohitAgarwal",
"SushantAgarwal",
"LucaSaba",
"Gian LucaChabert",
"SuneetGupta",
"AlessandroCarriero",
"AlessioPasche",
"PietroDanna",
"ArminMehmedovic",
"GavinoFaa",
"SaurabhShrivastava",
"KanishkaJain",
"HarshJain",
"TanayJujaray",
"Inder MSingh",
"MonikaTurk",
"Paramjit SChadha",
"Amer MJohri",
"Narendra NKhanna",
"SophieMavrogeni",
"John RLaird",
"David WSobel",
"MartinMiner",
"AntonellaBalestrieri",
"Petros PSfikakis",
"GeorgeTsoulfas",
"Durga PrasannaMisra",
"VikasAgarwal",
"George DKitas",
"Jagjit STeji",
"MustafaAl-Maini",
"Surinder KDhanjil",
"AndrewNicolaides",
"AdityaSharma",
"VijayRathore",
"MostafaFatemi",
"AzraAlizad",
"Pudukode RKrishnan",
"Rajanikant RYadav",
"FrenceNagy",
"Zsigmond TamásKincses",
"ZoltanRuzsa",
"SubbaramNaidu",
"KlaudijaViskovic",
"Manudeep KKalra",
"Jasjit SSuri"
] |
10.1016/j.compbiomed.2022.105571
10.1002/jmv.25996
10.1002/jmv.25855
10.23736/S0392-9590.21.04771-4
|
Disease Localization and Severity Assessment in Chest X-Ray Images using Multi-Stage Superpixels Classification.
|
Chest X-ray (CXR) is a non-invasive imaging modality used in the prognosis and management of chronic lung disorders like tuberculosis (TB), pneumonia, coronavirus disease (COVID-19), etc. The radiomic features associated with different disease manifestations assist in detecting, localizing, and grading the severity of infected lung regions. The majority of existing computer-aided diagnosis (CAD) systems use these features for the classification task, and only a few works have been dedicated to disease localization and severity scoring. Moreover, the existing deep learning approaches use class activation maps and saliency maps, which generate only rough localizations. This study aims to generate a compact disease boundary and infection map and to grade the infection severity using the proposed multistage superpixel classification-based disease localization and severity assessment framework.
The proposed method uses a simple linear iterative clustering (SLIC) technique to subdivide the lung field into small superpixels. Initially, the different radiomic texture and proposed shape features are extracted and combined to train different benchmark classifiers in a multistage framework. Subsequently, the predicted class labels are used to generate an infection map, mark disease boundary, and grade the infection severity. The performance is evaluated using a publicly available Montgomery dataset and validated using Friedman average ranking and Holm and Nemenyi post-hoc procedures.
The proposed multistage classification approach achieved accuracy (ACC) = 95.52%, F-measure (FM) = 95.48%, and area under the curve (AUC) = 0.955 for Stage-I and ACC = 85.35%, FM = 85.20%, AUC = 0.853 for Stage-II using the calibration dataset, and ACC = 93.41%, FM = 95.32%, AUC = 0.936 for Stage-I and ACC = 84.02%, FM = 71.01%, AUC = 0.795 for Stage-II using the validation dataset. Also, the model demonstrated an average Jaccard Index (JI) of 0.82 and a Pearson's correlation coefficient (r) of 0.9589.
The obtained classification results on the calibration and validation datasets confirm the promising performance of the proposed framework. The average JI shows promising potential to localize the disease, and the agreement between the radiologist score and the predicted severity score (r) confirms the robustness of the method. Finally, the statistical tests justified the significance of the obtained results.
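The Jaccard Index used to judge localization above is the ratio of intersection to union between predicted and ground-truth region masks. A minimal sketch on pixel-index sets (the function name and set representation are illustrative assumptions, not the paper's implementation):

```python
def jaccard(pred: set, truth: set) -> float:
    """Jaccard Index |A ∩ B| / |A ∪ B| between predicted and ground-truth
    infected-region pixel sets; defined as 1.0 when both masks are empty."""
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

# Two overlapping toy pixel sets: intersection {2, 3}, union {1, 2, 3, 4}.
score = jaccard({1, 2, 3}, {2, 3, 4})
```

On real masks, the same quantity is usually computed from boolean arrays as `(A & B).sum() / (A | B).sum()`; a JI of 0.82 means the predicted and annotated regions share 82% of their combined area.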
|
Computer methods and programs in biomedicine
| 2022-06-25T00:00:00
|
[
"Tej BahadurChandra",
"Bikesh KumarSingh",
"DeepakJain"
] |
10.1016/j.cmpb.2022.106947
10.1016/j.media.2020.101847
10.1016/j.media.2020.101846
10.1016/j.eswa.2020.113514
10.1016/j.measurement.2019.05.076
10.1016/j.eswa.2020.113909
10.1016/j.media.2021.102046
10.1016/j.patcog.2020.107613
10.1016/j.media.2018.12.007
10.1007/s10916-019-1222-8
10.1109/ICPC2T48082.2020.9071445
10.1109/tmi.2020.2993291
10.1016/j.media.2021.101978
10.1016/j.media.2020.101911
10.1016/j.media.2015.09.004
10.1016/j.media.2020.101913
10.1109/ACCESS.2020.2971257
10.1109/ACCESS.2020.3003810
10.1109/ICCV.2017.74
10.1007/s13755-021-00146-8
10.1109/TPAMI.2012.120
10.3990/2.401
10.3390/s17071474
10.1186/s12880-019-0369-6
10.1016/j.media.2018.05.006
10.1109/CVPR.2017.369
10.1109/CVPR.2018.00865
10.1007/s10723-021-09596-6
10.1049/ipr2.12153
10.1109/LSC.2018.8572113
10.1109/JBHI.2021.3069169
10.3390/diagnostics12010025
10.1016/j.compbiomed.2021.105002
10.1016/j.media.2021.102299
10.1259/bjr.20210759
10.1016/j.media.2020.101860
10.1016/j.patcog.2021.107828
10.1016/j.media.2021.102054
10.1016/j.compbiomed.2022.105466
10.1016/j.scs.2021.103252
10.1007/s13369-021-05958-0
10.5114/pjr.2022.113435
10.3390/diagnostics11040616
10.1088/1742-6596/1767/1/012004
10.1109/TMI.2020.3040950
10.1109/JBHI.2020.3037127
10.1016/j.media.2020.101794
10.1038/s41598-019-42557-4
10.1109/TMI.2013.2290491
10.1109/TMI.2013.2284099
10.1016/j.measurement.2019.107426
10.1007/s00330-020-07504-2
10.1016/j.ijid.2020.05.021
10.1016/j.ijmedinf.2019.06.017
10.1109/TSMC.1973.4309314
10.1016/j.media.2020.101819
10.1016/j.advengsoft.2013.12.007
10.1007/s10462-009-9124-7
10.1016/B978-0-12-809633-8.20473-1
10.11613/BM.2014.003
10.1109/TMI.2017.2775636
10.5152/dir.2020.20205
|
Lung Sonography in Critical Care Medicine.
|
During the last five decades, lung sonography has developed into a core competency of intensive care medicine. It is a highly accurate bedside tool, with clear diagnostic criteria for most causes of respiratory failure (pneumothorax, pulmonary edema, pneumonia, pulmonary embolism, chronic obstructive pulmonary disease, asthma, and pleural effusion). It helps in distinguishing a hypovolemic from a cardiogenic, obstructive, or distributive shock. In addition to diagnostics, it can also be used to guide ventilator settings, fluid administration, and even antimicrobial therapy, as well as to assess diaphragmatic function. Moreover, it provides risk-reducing guidance during invasive procedures, e.g., intubation, thoracocentesis, or percutaneous dilatational tracheostomy. The recent pandemic has further increased its scope of clinical applications in the management of COVID-19 patients, from their initial presentation at the emergency department, during their hospitalization, and after their discharge into the community. Despite its increasing use, a consensus on education, assessment of competencies, and certification is still missing. Deep learning and artificial intelligence are constantly developing in medical imaging, and contrast-enhanced ultrasound enables new diagnostic perspectives. This review summarizes the clinical aspects of lung sonography in intensive care medicine and provides an overview about current training modalities, diagnostic limitations, and future developments.
|
Diagnostics (Basel, Switzerland)
| 2022-06-25T00:00:00
|
[
"RobertBreitkopf",
"BenediktTreml",
"SasaRajsic"
] |
10.3390/diagnostics12061405
10.1007/s00134-015-3952-5
10.1186/s13089-017-0059-y
10.1016/S2213-2600(14)70135-3
10.1378/chest.14-2608
10.1097/MD.0000000000005713
10.1186/cc13016
10.1590/S1516-31802010000200009
10.1097/CCM.0000000000003129
10.1097/CCM.0b013e31824e68ae
10.3760/CMA.J.ISSN.2095-4352.2015.07.008
10.1007/s00134-016-4411-7
10.1186/cc13859
10.1164/rccm.201003-0369OC
10.1056/NEJMra072149
10.7754/Clin.Lab.2017.170730
10.1213/ANE.0b013e3181e7cc42
10.1002/uog.22034
10.1016/j.ajog.2020.04.020
10.21106/ijma.294
10.1016/j.aca.2020.10.009
10.1016/j.ultrasmedbio.2020.04.033
10.3390/diagnostics11122202
10.1016/j.crad.2020.05.001
10.1007/s00134-021-06506-y
10.1378/chest.13-0882
10.1056/NEJMra0909487
10.1097/01.CCM.0000260624.99430.22
10.1378/chest.07-2800
10.1002/jum.15285
10.1016/j.acra.2020.07.002
10.1164/rccm.201802-0227LE
10.15557/JoU.2021.0036
10.1093/BJACEACCP/MKV012
10.1186/2110-5820-4-1
10.4103/0970-2113.156245
10.1097/00005373-200204000-00029
10.4103/1658-354X.197351
10.1164/ajrccm.156.5.96-07096
10.1007/s00134-012-2513-4
10.1007/s00134-003-1930-9
10.1007/s00134-010-2079-y
10.1016/j.ajem.2016.07.032
10.5811/westjem.2015.3.24809
10.1007/s001340000627
10.1016/j.redar.2020.02.008
10.1378/chest.108.5.1345
10.1197/j.aem.2005.05.005
10.1097/01.TA.0000133565.88871.E4
10.1007/s00134-014-3402-9
10.1378/chest.12-1445
10.1097/01.CCM.0000164542.86954.B4
10.1097/00005373-200102000-00003
10.1007/978-3-319-44072-9_4
10.1186/1472-6920-9-3
10.1111/j.1553-2712.2008.00347.x
10.1186/1476-7120-4-34
10.3389/fphys.2022.838479
10.1378/chest.09-0001
10.1186/1476-7120-6-16
10.7863/jum.2003.22.2.173
10.1378/chest.10-0435
10.1007/s001340050771
10.1159/000074195
10.1186/s12890-015-0091-2
10.1097/CCM.0b013e3181b08cdb
10.1007/s00134-017-4941-7
10.3390/JCM11051224
10.1097/CCM.0000000000003340
10.1186/cc5668
10.1097/CCM.0000000000003377
10.1007/978-3-642-37096-0_22
10.1378/chest.128.3.1531
10.1016/0301-5629(95)02003-9
10.1378/chest.13-1087
10.1378/chest.08-2281
10.1136/emj.2010.101584
10.1378/chest.12-0364
10.1016/j.chest.2015.12.012
10.3390/jcm10112453
10.2214/ajr.159.4.1529829
10.1007/s001340050988
10.1007/s00134-005-0024-2
10.1097/01.CCM.0000171532.02639.08
10.1378/chest.127.1.224
10.1007/s40477-017-0266-1
10.1148/radiology.191.3.8184046
10.1097/00000542-200401000-00006
10.2214/ajr.159.1.1609716
10.1097/00063198-200307000-00007
10.1002/jcu.1870190206
10.1136/thx.2008.100545
10.1378/chest.126.1.129
10.1148/rg.322115127
10.1016/j.ultrasmedbio.2010.10.004
10.1111/j.1440-1843.2011.02005.x
10.1097/CCM.0b013e3182266408
10.1177/0885066615583639
10.1186/2036-7902-6-8
10.1007/s00134-012-2547-7
10.1186/s13054-015-0894-9
10.1164/rccm.201004-0670OC
10.1164/rccm.201602-0367OC
10.1016/S0009-9260(05)82987-3
10.1007/s00134-015-4125-2
10.1136/thoraxjnl-2013-204111
10.1016/S0012-3692(15)32912-3
10.1097/00003246-199612000-00020
10.1007/s12630-014-0301-z
10.1016/j.annemergmed.2006.07.004
10.1197/j.aem.2005.08.014
10.1017/S1049023X00002004
10.1002/ppul.25955
10.1378/chest.125.3.1059
10.1186/1477-7819-12-139
10.1001/archinternmed.2009.548
10.1378/chest.12-0447
10.1378/chest.123.2.436
10.5402/2012/676524
10.1378/chest.11-0348
10.1177/0885066618755334
10.1007/s00408-019-00230-7
10.1002/jum.14448
10.1007/s12028-009-9259-z
10.1186/cc10344
10.2139/ssrn.3544750
10.26355/EURREV_202003_20549
10.1148/radiol.2020200847
10.1186/s13089-020-00171-w
10.4269/ajtmh.20-0280
10.1002/emp2.12194
10.1186/S13089-021-00250-6
10.1378/chest.08-2305
10.1007/S00134-011-2246-9
10.1016/j.chest.2019.08.859
10.1016/j.chest.2019.08.806
10.1186/s13089-018-0103-6
10.1186/cc10194
10.1007/s00134-009-1531-3
10.1186/s13089-017-0081-0
10.2214/ajr.168.2.9016251
10.2214/ajr.164.6.7754907
10.1109/TUFFC.2021.3094849
10.1109/TMI.2020.2994459
10.1109/JBHI.2019.2936151
10.1109/TUFFC.2020.3002249
10.1016/S0140-6736(20)31875-4
10.1055/A-0586-1107
10.1016/j.jus.2008.05.008
10.4329/wjr.v5.i10.372
10.1002/jum.15338
10.1007/s00134-020-06085-4
|
Novel COVID-19 Diagnosis Delivery App Using Computed Tomography Images Analyzed with Saliency-Preprocessing and Deep Learning.
|
This app project aimed to remotely deliver diagnoses and disease-progression information to COVID-19 patients to help minimize risk during this and future pandemics. Data collected from chest computed tomography (CT) scans of COVID-19-infected patients were shared through the app. In this article, we focused on image preprocessing techniques to identify and highlight areas with ground-glass opacity (GGO) and pulmonary infiltrates (PIs) in CT image sequences of COVID-19 cases. Convolutional neural networks (CNNs) were used to classify the disease progression of pneumonia. Each GGO and PI pattern was highlighted with saliency map fusion, and the resulting map was used to train and test a CNN classification scheme with three classes. In addition to patients, this information was shared between the respiratory triage/radiologist and the COVID-19 multidisciplinary teams through the application so that the severity of the disease could be understood through CT and medical diagnosis. The three-class, disease-level COVID-19 classification results exhibited a macro-precision of more than 94.89% in two-fold cross-validation. Both the segmentation and classification results were comparable to those made by a medical specialist.
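The saliency-fusion step described in the abstract can be illustrated with a minimal NumPy sketch; the min-max normalization, the weighted-mean fusion rule, and the function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def normalize(m):
    # Rescale a saliency map to [0, 1]; constant maps become all zeros.
    m = m.astype(float)
    lo, hi = m.min(), m.max()
    return np.zeros_like(m) if hi == lo else (m - lo) / (hi - lo)

def fuse_saliency(ggo_map, pi_map, w=0.5):
    # Weighted mean of the two normalized maps, renormalized to [0, 1].
    fused = w * normalize(ggo_map) + (1 - w) * normalize(pi_map)
    return normalize(fused)

# Toy 4x4 stand-ins for the GGO and PI saliency maps.
ggo = np.array([[0, 1, 2, 3]] * 4)
pi = np.array([[0, 0, 1, 3]] * 4)
fused = fuse_saliency(ggo, pi)
```

Per-pixel fusion like this yields a single map in [0, 1] that can be stacked with the CT slice as CNN input.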
|
Tomography (Ann Arbor, Mich.)
| 2022-06-24T00:00:00
|
[
"SantiagoTello-Mijares",
"FomuyWoo"
] |
10.3390/tomography8030134
10.1016/j.radi.2020.05.012
10.1148/radiol.2020201160
10.1007/s00330-020-06955-x
10.1148/radiol.2020200343
10.2214/AJR.20.22976
10.1016/j.diii.2020.03.014
10.1016/j.ejrad.2020.108941
10.5152/dir.2020.20144
10.1148/radiol.2020200230
10.1007/s00330-020-06976-6
10.1056/NEJMoa2002032
10.1111/jgh.15094
10.3390/diagnostics12040846
10.1016/j.ejrad.2020.109008
10.1016/j.jrid.2020.04.001
10.1109/TNNLS.2016.2582924
10.1021/acs.jcim.8b00706
10.1109/ACCESS.2020.3009328
10.48550/arXiv.2004.07407
10.1016/j.compbiomed.2020.104037
10.1016/j.patrec.2020.10.001
10.1148/radiol.2020200905
10.1109/ACCESS.2021.3058854
10.1109/TIP.2021.3058783
10.1007/BF00133570
10.1109/TIP.2008.925375
10.1155/2021/8869372
10.1007/s10140-021-01937-y
10.1109/TPAMI.1986.4767851
10.1117/1.JBO.23.5.056005
10.1148/ryct.2020200213
10.1097/RLI.0000000000000670
10.1148/radiol.2020200843
10.1148/radiol.2020201473
10.1109/5.726791
10.1145/3065386
10.1109/ic-ETITE47903.2020.049
10.1109/TEVC.2021.3088631
10.1136/bmj.m998
10.2196/18810
10.1109/42.363096
|
Convolutional neural network based CT scan classification method for COVID-19 test validation.
|
Given the novel coronavirus discovered in Wuhan, China, in December 2019, the high false-negative rate of RT-PCR, and the time required to obtain its results, research has shown that computed tomography (CT) has become one of the essential auxiliary means of diagnosing and treating novel coronavirus pneumonia. Since few COVID-19 CT datasets are currently available, we propose using conditional generative adversarial networks (CGANs) to augment the data and obtain CT datasets with more samples, reducing the risk of overfitting. In addition, a BIN residual-block-based method is proposed: the improved U-Net network is used for image segmentation and then combined with a multi-layer perceptron for classification prediction. Comparison with network models such as AlexNet and GoogleNet shows that the proposed BUF-Net model performs best, reaching an accuracy of 93%. Using Grad-CAM technology to visualize the system's output illustrates more intuitively the critical role of CT images in diagnosing COVID-19. Applying deep learning with the proposed techniques in medical imaging can help radiologists achieve more effective diagnoses, which is the main objective of this research. In summary, this study proposes to employ CGAN technology to augment the limited dataset, integrate the residual block into the U-Net network, and combine it with a multi-layer perceptron to construct a new network architecture for COVID-19 detection from CT images.
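For intuition, the data-augmentation role the study assigns to the conditional GAN can be contrasted with classical geometric augmentation, sketched below in NumPy; the flip/rotation choices and function names are illustrative assumptions, not the paper's CGAN:

```python
import numpy as np

def augment(images, rng):
    # Produce simple flipped/rotated variants of each image -- a classical
    # stand-in for the sample-generation role the paper assigns to a CGAN.
    out = []
    for img in images:
        out.append(img)
        out.append(np.fliplr(img))                        # horizontal flip
        out.append(np.rot90(img, k=rng.integers(1, 4)))   # random 90-degree rotation
    return np.stack(out)

rng = np.random.default_rng(0)
imgs = rng.random((5, 32, 32))   # five toy "CT slices"
aug = augment(imgs, rng)         # triples the training pool
```

Unlike a GAN, this only recombines existing pixels; the point of the CGAN in the study is to synthesize genuinely new samples.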
|
Smart health (Amsterdam, Netherlands)
| 2022-06-21T00:00:00
|
[
"MukeshSoni",
"Ajay KumarSingh",
"K SureshBabu",
"SumitKumar",
"AkhileshKumar",
"ShwetaSingh"
] |
10.1016/j.smhl.2022.100296
10.2174/1573405617666210215143503
10.1108/WJE-12-2020-0631
10.1109/ICAS49788.2021.9551169
10.1108/WJE-09-2020-0450
10.1109/JBHI.2021.3060035
10.1109/ASYU52992.2021.9598993
10.1109/ACCESS.2020.3005510
10.1109/JBHI.2020.3042523
10.1155/2021/9293877
10.1109/JBHI.2021.3051470
10.1109/ESCI50559.2021.9396773
10.1109/ISITIA52817.2021.9502217
10.1109/ICOIACT50329.2020.9332123
10.4018/IJSIR.287544
10.1109/TMI.2020.2994908
10.1109/CAC51589.2020.9326470
10.1109/TMI.2021.3104474
10.1109/TBDATA.2021.3056564
10.1109/JBHI.2021.3067465
|
Improved Analysis of COVID-19 Influenced Pneumonia from the Chest X-Rays Using Fine-Tuned Residual Networks.
|
COVID-19 has remained a threat to world life despite a recent reduction in cases. There is still a possibility that the virus will evolve and become more contagious. If such a situation occurs, the resulting calamity will be worse than in the past if we act irresponsibly. COVID-19 must be widely screened and recognized early to avert a global epidemic. Positive individuals should be quarantined immediately, as this is the only effective way to prevent a global tragedy like those that have occurred previously. No positive case should go unrecognized. However, current COVID-19 detection procedures require a significant amount of time for human examination based on genetic and imaging techniques. Apart from RT-PCR and antigen-based tests, CXR and CT imaging techniques aid in the rapid and cost-effective identification of COVID-19. However, discriminating between diseased and normal X-rays is a time-consuming and challenging task requiring an expert's skill. In such a case, the only solution is an automatic diagnosis strategy for identifying COVID-19 instances from chest X-ray images. This article utilized a deep convolutional neural network, ResNet, which has been demonstrated to be among the most effective architectures for image classification. The model is initialized with ResNet weights pretrained on ImageNet. ResNet34, ResNet50, and ResNet101 were implemented and validated against the dataset. With a more extensive network, the accuracy appeared to improve. Nonetheless, our objective was to balance accuracy and training time on a larger dataset. By comparing the prediction outcomes of the three models, we concluded that ResNet34 is the more likely candidate for COVID-19 detection from chest X-rays. The highest accuracy reached 98.34%, higher than the accuracy achieved by other state-of-the-art approaches examined in earlier studies. Subsequent analysis indicated that the incorrect predictions were made with approximately 100% certainty. This uncovered a severe weakness of CNNs, particularly in the medical domain, where critical decisions are made. However, this can be addressed in a future study by developing a modified model that incorporates uncertainty into the predictions, allowing medical personnel to manually review the incorrect predictions.
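The overconfident-error analysis mentioned above (wrong predictions made with near-100% certainty) can be reproduced on toy logits; the confidence threshold and all names below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def overconfident_errors(logits, labels, threshold=0.99):
    # Indices of samples that are misclassified yet predicted with
    # confidence above `threshold` -- the failure mode the abstract reports.
    probs = softmax(logits)
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    return np.where((preds != labels) & (conf > threshold))[0]

logits = np.array([[10.0, 0.0], [0.1, 0.0], [8.0, 0.0]])
labels = np.array([1, 1, 0])   # sample 0 is wrong and very confident
idx = overconfident_errors(logits, labels)
```

Flagging such samples is a cheap first step toward the uncertainty-aware review workflow the authors propose for future work.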
|
Computational intelligence and neuroscience
| 2022-06-21T00:00:00
|
[
"AmelKsibi",
"MohammedZakariah",
"ManelAyadi",
"HelaElmannai",
"Prashant KumarShukla",
"HalifaAwal",
"MoniaHamdi"
] |
10.1155/2022/9414567
10.1016/s0140-6736(20)30183-5
10.1001/jama.2020.3786
10.2807/1560-7917.ES.2020.25.3.2000045
10.3201/eid2606.200301
10.1038/s42256-021-00338-7
10.5114/pjr.2020.100788
10.1148/radiol.2020200432
10.1148/radiol.2020200343
10.1016/j.media.2020.101794
10.1146/annurev.bioeng.8.061505.095802
10.1093/bib/bbx044
10.1097/rli.0000000000000672
10.1016/j.bbe.2020.08.008
10.1016/j.media.2017.07.005
10.1148/radiol.2019192515
10.1109/access.2020.3016780
10.1109/tii.2021.3057524
10.1371/journal.pone.0255886
10.1109/cvpr.2017.369
10.1007/s13246-020-00865-4
10.1016/j.cell.2018.02.010
10.1016/j.compbiomed.2020.103792
10.1109/access.2020.3010287
10.1016/j.cmpb.2020.105581
10.1007/s12559-021-09848-3
10.1109/access.2020.2994762
10.1016/j.compbiomed.2021.104319
10.1038/s41598-020-76550-z
10.1155/2021/8828404
10.1109/iccc51575.2020.9344870
10.1155/2021/3281135
10.1117/1.jmi.8.s1.017503
10.1109/aset.2018.8379889
10.1016/j.knosys.2015.01.010
10.1007/s11263-015-0816-y
10.1109/CVPR.2016.90
10.1016/j.patcog.2019.01.006
10.1371/journal.pone.0118432
10.1007/978-981-15-5281-6_7
10.1016/j.mehy.2020.109761
10.1101/2020.07.08.20149161
10.1007/s10044-021-00970-4
10.1155/2021/6799202
10.1109/cvpr.2015.7298640
|
Combating COVID-19 Using Generative Adversarial Networks and Artificial Intelligence for Medical Images: Scoping Review.
|
Research on the diagnosis of COVID-19 using lung images is limited by the scarcity of imaging data. Generative adversarial networks (GANs) are popular for synthesis and data augmentation. GANs have been explored for data augmentation to enhance the performance of artificial intelligence (AI) methods for the diagnosis of COVID-19 within lung computed tomography (CT) and X-ray images. However, the role of GANs in overcoming data scarcity for COVID-19 is not well understood.
This review presents a comprehensive study on the role of GANs in addressing the challenges related to COVID-19 data scarcity and diagnosis. It is the first review that summarizes different GAN methods and lung imaging data sets for COVID-19. It attempts to answer the questions related to applications of GANs, popular GAN architectures, frequently used image modalities, and the availability of source code.
A search was conducted on 5 databases, namely PubMed, IEEEXplore, Association for Computing Machinery (ACM) Digital Library, Scopus, and Google Scholar. The search was conducted from October 11-13, 2021. The search was conducted using intervention keywords, such as "generative adversarial networks" and "GANs," and application keywords, such as "COVID-19" and "coronavirus." The review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines for systematic and scoping reviews. Only those studies were included that reported GAN-based methods for analyzing chest X-ray images, chest CT images, and chest ultrasound images. Any studies that used deep learning methods but did not use GANs were excluded. No restrictions were imposed on the country of publication, study design, or outcomes. Only those studies that were in English and were published from 2020 to 2022 were included. No studies before 2020 were included.
This review included 57 full-text studies that reported the use of GANs for different applications in COVID-19 lung imaging data. Most of the studies (n=42, 74%) used GANs for data augmentation to enhance the performance of AI techniques for COVID-19 diagnosis. Other popular applications of GANs were segmentation of lungs and superresolution of lung images. The cycleGAN and the conditional GAN were the most commonly used architectures, used in 9 studies each. In addition, 29 (51%) studies used chest X-ray images, while 21 (37%) studies used CT images for the training of GANs. For the majority of the studies (n=47, 82%), the experiments were conducted and results were reported using publicly available data. A secondary evaluation of the results by radiologists/clinicians was reported by only 2 (4%) studies.
Studies have shown that GANs have great potential to address the data scarcity challenge for lung images in COVID-19. Data synthesized with GANs have been helpful to improve the training of the convolutional neural network (CNN) models trained for the diagnosis of COVID-19. In addition, GANs have also contributed to enhancing the CNNs' performance through the superresolution of the images and segmentation. This review also identified key limitations of the potential transformation of GAN-based methods in clinical applications.
|
JMIR medical informatics
| 2022-06-17T00:00:00
|
[
"HazratAli",
"ZubairShah"
] |
10.2196/37365
10.1016/S1473-3099(20)30235-8
10.1002/jmv.25786
10.2196/20756
10.3389/fmed.2021.704256
10.1109/access.2020.3010287
10.1152/physiolgenomics.00029.2020
10.1109/tai.2020.3020521
10.1109/access.2020.3023495
10.1007/s10916-018-1072-9
10.1016/j.media.2019.101552
10.1109/iccv.2017.244
10.3389/fpubh.2020.00164
10.1002/acm2.13121
10.2196/27414
10.7326/m18-0850
10.1109/jbhi.2020.3042523
10.1007/s00259-020-04929-1
10.1038/s41598-021-87994-2
10.1007/s13246-020-00952-6
10.1016/j.media.2021.102159
10.1007/s12559-020-09785-7
10.1007/s00521-020-05437-x
10.1016/j.compbiomed.2020.104181
10.1002/mp.15044
10.1007/s10796-021-10144-6
10.1007/s12539-020-00403-6
10.1016/j.eswa.2021.115681
10.2147/idr.s296346
10.1007/s00521-020-05636-6
10.3390/diagnostics11050895
10.1007/s12559-021-09926-6
10.1016/j.bspc.2021.102901
10.1007/s42979-021-00795-2
10.1016/j.csbj.2021.02.016
10.1016/j.bspc.2021.103182
10.1109/conit51480.2021.9498272
10.1109/isbi48211.2021.9434159
10.1109/access.2020.2994762
10.1109/bibm49941.2020.9313466
10.1109/icassp39728.2021.9414031
10.1109/cec45853.2021.9504743
10.1109/prai53619.2021.9551043
10.1109/bigdata50022.2020.9377878
10.1109/pic50277.2020.9350813
10.1109/tem.2021.3103334
10.1109/jbhi.2021.3067465
10.1109/csci51800.2020.00160
10.1109/access.2020.3025010
10.1109/isbi48211.2021.9433806
10.3390/sym12040651
10.1007/s13278-021-00731-5
10.3390/engproc2021007006
10.1109/access.2020.3017915
10.1155/2021/6680455
10.32604/csse.2021.017191
10.3390/app11167174
10.1145/3458744.3474039
10.1145/3449639.3459319
10.1007/s00521-021-06344-5
10.1016/j.aej.2021.01.011
10.1016/j.eswa.2021.115401
10.1007/978-3-030-60802-6_36
10.3390/sym12091530
10.32604/cmc.2022.018547
10.32604/cmc.2022.018564
10.1007/s10489-020-01867-1
10.1007/978-3-030-86340-1_47
10.1007/978-3-030-68035-0_12
10.1117/12.2582162
|
Automated Multi-View Multi-Modal Assessment of COVID-19 Patients Using Reciprocal Attention and Biomedical Transform.
|
Automated severity assessment of coronavirus disease 2019 (COVID-19) patients can help rationally allocate medical resources and improve patients' survival rates. Existing methods conduct severity assessment mainly on a single modality and a single view, which tends to miss potentially informative interactions. To tackle this problem, in this paper, we propose a multi-view, multi-modal model to automatically assess the severity of COVID-19 patients based on deep learning. The proposed model receives multi-view ultrasound images and biomedical indices of patients and generates comprehensive features for assessment tasks. We also propose a reciprocal attention module to acquire the underlying interactions between multi-view ultrasound data, and a biomedical transform module to integrate biomedical data with ultrasound data and produce multi-modal features. The proposed model is trained and tested on compound datasets, and it yields 92.75% accuracy and 80.95% recall, the best performance compared to other state-of-the-art methods. Further ablation experiments and discussions consistently indicate the feasibility and advancement of the proposed model.
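A minimal sketch of the reciprocal-attention idea, with each view attending over the other and the attended features concatenated; this single-head NumPy version is an illustrative assumption, not the paper's exact module:

```python
import numpy as np

def attend(q, k, v):
    # Scaled dot-product attention (single head), numerically stable softmax.
    scores = q @ k.T / np.sqrt(q.shape[1])
    scores = scores - scores.max(axis=1, keepdims=True)
    w = np.exp(scores)
    w = w / w.sum(axis=1, keepdims=True)
    return w @ v

def reciprocal_attention(view_a, view_b):
    # Each view queries the other; the two attended feature sets are
    # concatenated so downstream layers see cross-view interactions.
    a2b = attend(view_a, view_b, view_b)   # A attends over B
    b2a = attend(view_b, view_a, view_a)   # B attends over A
    return np.concatenate([a2b, b2a], axis=1)

rng = np.random.default_rng(1)
a = rng.random((4, 8))   # 4 feature tokens from ultrasound view A
b = rng.random((4, 8))   # 4 feature tokens from ultrasound view B
fused = reciprocal_attention(a, b)
```

The fused features could then be concatenated with the transformed biomedical indices before the assessment head.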
|
Frontiers in public health
| 2022-06-14T00:00:00
|
[
"YanhanLi",
"HongyunZhao",
"TianGan",
"YangLiu",
"LianZou",
"TingXu",
"XuanChen",
"CienFan",
"MengWu"
] |
10.3389/fpubh.2022.886958
10.1056/NEJMoa2001017
10.1056/NEJMoa2001191
10.1148/radiol.2020200642
10.1148/ryct.2020200034
10.1016/j.compbiomed.2021.104721
10.1016/j.advms.2020.06.005
10.1007/s10439-015-1495-0
10.1016/S2213-2600(20)30120-X
10.1007/978-3-030-32245-8_64
10.1016/j.compmedimag.2019.101688
10.1007/s00330-019-06163-2
10.1001/jama.2016.17216
10.1001/jama.2017.18152
10.1016/j.cell.2018.02.010
10.1148/radiol.2020200432
10.1038/s42256-020-0180-7
10.1016/j.cell.2020.05.032
10.1038/s41467-020-17971-2
10.1038/s41467-020-17280-8
10.1038/s41598-020-76550-z
10.1038/s41598-020-76282-0
10.3390/diagnostics12010025
10.1016/j.compbiomed.2020.104037
10.1016/j.media.2021.102299
10.1016/j.bspc.2021.102622
10.1016/j.rinp.2021.104495
10.1038/s41598-022-05052-x
10.1145/3462635
10.1038/s41598-021-93543-8
10.1016/j.bspc.2021.103182
10.1002/jum.15285
10.48550/arXiv.1412.6980
10.1109/ICCV.2017.324
10.48550/arXiv.1409.1556
10.1109/CVPR.2016.90
10.1109/CVPR.2017.243
10.1109/CVPR.2018.00745
10.1109/CVPR.2017.195
10.1109/ICCV.2017.74
|
A Deep Learning Model for Diagnosing COVID-19 and Pneumonia through X-ray.
|
The new global pandemic caused by the 2019 novel coronavirus (COVID-19), which causes novel coronavirus pneumonia, has spread rapidly around the world, causing enormous damage to daily life, public health security, and the global economy. Early detection and treatment of COVID-19-infected patients are critical to prevent the further spread of the epidemic. However, existing detection methods are unable to rapidly detect COVID-19 patients, so infected individuals are not detected in a timely manner, which complicates the prevention and control of COVID-19 to some extent. Therefore, it is crucial to develop a rapid and practical COVID-19 detection method. In this work, we explored the application of deep learning to COVID-19 detection to develop a rapid detection method.
Existing studies have shown that novel coronavirus pneumonia has significant radiographic manifestations. In this study, we analyze and select the features of chest radiographs. We propose a chest X-ray (CXR) classification method based on the selected features and investigate the application of transfer learning to detecting pneumonia and COVID-19. Furthermore, we combine the proposed CXR classification method with transfer learning and ensemble learning, and propose an ensemble deep learning model based on transfer learning, called COVID-ensemble, to diagnose pneumonia and COVID-19 using chest X-ray images. The model aims to provide an accurate diagnosis for binary classification (no finding/pneumonia) and multi-class classification (COVID-19/no finding/pneumonia).
Our proposed CXR classification method based on selected features can significantly improve the CXR classification accuracy of the CNN model. Using this method, DarkNet19 improved its binary and three-class classification accuracies by 3.5% and 5.78%, respectively. In addition, the COVID-ensemble achieved 91.5% accuracy in the binary classification task and 91.11% in the multi-class classification task. The experimental results demonstrate that the COVID-ensemble can quickly and accurately detect COVID-19 and pneumonia automatically from X-ray images and that the performance of this model is superior to that of several existing methods.
Our proposed COVID-ensemble can not only overcome the limitations of the conventional COVID-19 detection method RT-PCR and provide convenient and fast COVID-19 detection but also automatically detect pneumonia, thereby reducing the pressure on the medical staff. Using deep learning models to automatically diagnose COVID-19 and pneumonia from X-ray images can serve as a fast and efficient screening method for COVID-19 and pneumonia.
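One common way to combine base models into an ensemble such as COVID-ensemble is soft voting over predicted class probabilities; the abstract does not state the exact combination rule, so the sketch below is an illustrative assumption:

```python
import numpy as np

def soft_vote(prob_list):
    # Average the class-probability outputs of several base models and take
    # the argmax -- one common ensemble rule (illustrative choice here).
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Three hypothetical base models, 4 samples, 3 classes
# (e.g. COVID-19 / no finding / pneumonia).
m1 = np.array([[.7, .2, .1], [.1, .8, .1], [.3, .3, .4], [.5, .4, .1]])
m2 = np.array([[.6, .3, .1], [.2, .7, .1], [.2, .2, .6], [.1, .8, .1]])
m3 = np.array([[.8, .1, .1], [.3, .6, .1], [.1, .3, .6], [.2, .7, .1]])
preds = soft_vote([m1, m2, m3])
```

Averaging probabilities (rather than hard votes) lets confident base models outweigh uncertain ones.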
|
Current medical imaging
| 2022-06-14T00:00:00
|
[
"XiangbinLiu",
"WenqianWu",
"JerryChun-Wei Lin",
"ShuaiLiu"
] |
10.2174/1573405618666220610093740
|
The Pitfalls of Using Open Data to Develop Deep Learning Solutions for COVID-19 Detection in Chest X-Rays.
|
Since the emergence of COVID-19, deep learning models have been developed to identify COVID-19 from chest X-rays. With little to no direct access to hospital data, the AI community relies heavily on public data comprising numerous data sources. Model performance results have been exceptional when training and testing on open-source data, surpassing the reported capabilities of AI in pneumonia detection prior to the COVID-19 outbreak. In this study, impactful models are trained on widely used open-source data and tested on an external test set and a hospital dataset, for the task of classifying chest X-rays into one of three classes: COVID-19, non-COVID pneumonia, and no pneumonia. Classification performance of the models investigated is evaluated through ROC curves, confusion matrices, and standard classification metrics. Explainability modules are implemented to explore the image features most important to classification. Data analysis and model evaluations show that the popular open-source dataset COVIDx is not representative of the real clinical problem and that results from testing on it are inflated. Dependence on open-source data can leave models vulnerable to bias and confounding variables, requiring careful analysis to develop clinically useful and viable AI tools for COVID-19 detection in chest X-rays.
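The confusion-matrix evaluation used in the study can be sketched in a few lines of NumPy; the toy labels and the class ordering below are illustrative assumptions:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # Rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_recall(cm):
    # Diagonal over row sums; classes with no samples get recall 0.
    row = cm.sum(axis=1)
    return np.where(row > 0, cm.diagonal() / np.maximum(row, 1), 0.0)

# Illustrative classes: 0 = COVID-19, 1 = non-COVID pneumonia, 2 = no pneumonia
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
rec = per_class_recall(cm)
```

Per-class recall makes the dataset-shift problem visible: a model can score high overall accuracy on COVIDx yet collapse on one class of the hospital data.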
|
Studies in health technology and informatics
| 2022-06-09T00:00:00
|
[
"RachaelHarkness",
"GeoffHall",
"Alejandro FFrangi",
"NishantRavikumar",
"KieranZucker"
] |
10.3233/SHTI220164
|
Evaluation of the models generated from clinical features and deep learning-based segmentations: Can thoracic CT on admission help us to predict hospitalized COVID-19 patients who will require intensive care?
|
The aim of the study was to predict the probability of intensive care unit (ICU) admission for hospitalized COVID-19 patients using clinical and artificial intelligence segmentation-based volumetric and CT-radiomics parameters on admission.
Twenty-eight clinical/laboratory features, 21 volumetric parameters, and 74 radiomics parameters obtained by deep learning (DL)-based segmentations from CT examinations of 191 severe COVID-19 inpatients admitted between March 2020 and March 2021 were collected. Patients were divided into Group 1 (117 patients discharged from the inpatient service) and Group 2 (74 patients transferred to the ICU), and the differences between the groups were evaluated with the t-test and Mann-Whitney test. The sensitivities and specificities of significantly different parameters were evaluated by ROC analysis. Subsequently, 152 (79.5%) patients were assigned to the training/cross-validation set, and 39 (20.5%) patients were assigned to the test set. Clinical, radiological, and combined logit-fit models were generated by using the Bayesian information criterion from the training set and optimized via tenfold cross-validation. To simultaneously use all of the clinical, volumetric, and radiomics parameters, a random forest model was produced, and this model was trained by using a balanced training set created by adding synthetic data to the existing training/cross-validation set. The results of the models in predicting ICU patients were evaluated with the test set.
No parameter individually created a reliable classifier. When the test set was evaluated with the final models, the AUC values were 0.736, 0.708, and 0.794, the specificity values were 79.17%, 79.17%, and 87.50%, the sensitivity values were 66.67%, 60%, and 73.33%, and the F1 values were 0.67, 0.62, and 0.76 for the clinical, radiological, and combined logit-fit models, respectively. The random forest model that was trained with the balanced training/cross-validation set was the most successful model, achieving an AUC of 0.837, specificity of 87.50%, sensitivity of 80%, and F1 value of 0.80 in the test set.
By using a machine learning algorithm that was composed of clinical and DL-segmentation-based radiological parameters and that was trained with a balanced data set, COVID-19 patients who may require intensive care could be successfully predicted.
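The AUC values reported above can be computed directly from scores and labels via the Mann-Whitney formulation; the toy risk scores below are illustrative:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    # AUC as the probability that a randomly chosen positive scores higher
    # than a randomly chosen negative (ties count one half).
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.25, 0.3, 0.2]   # hypothetical predicted ICU risks
labels = [1, 1, 1, 0, 0]              # 1 = transferred to ICU
auc = auc_mann_whitney(scores, labels)
```

This rank-based view explains why AUC is insensitive to the class imbalance that the study's synthetic balancing addresses for the random forest itself.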
|
BMC medical imaging
| 2022-06-08T00:00:00
|
[
"MutluGülbay",
"AliyeBaştuğ",
"ErdemÖzkan",
"Büşra YüceÖztürk",
"Bökebatur Ahmet RaşitMendi",
"HürremBodur"
] |
10.1186/s12880-022-00833-2
10.1371/journal.pone.0245272
10.1111/joim.13091
10.4266/acc.2020.00381
10.1371/journal.pone.0243709
10.1093/cid/ciaa443
10.1001/jamainternmed.2020.2033
10.1093/cid/ciaa414
10.1186/s40560-021-00527-x
10.7150/ijms.48281
10.5114/pjr.2020.98009
10.7150/thno.46465
10.1148/radiol.2015151169
10.1038/s41598-021-83237-6
10.1371/journal.pone.0246582
10.1038/s41598-021-90991-0
10.7150/thno.46428
10.7150/ijbs.58855
10.1186/s12880-020-00529-5
10.1016/s0895-4356(96)00236-3
10.1186/1471-2105-14-106
10.2307/2531595
10.1016/j.ijid.2021.12.357
10.1016/S2589-7500(21)00039-X
10.1371/journal.pone.0230548
10.4081/jphr.2021.2270
10.2214/AJR.20.24044
10.2214/AJR.20.22976
10.1148/ryct.2020200322
10.1007/s11604-020-00956-y
10.1016/j.ijid.2020.10.095
10.5152/dir.2020.20451
10.1016/j.ejrad.2021.109552
10.1007/s11547-020-01197-9
10.1148/radiol.2021204522
|
Deep learning-based lesion subtyping and prediction of clinical outcomes in COVID-19 pneumonia using chest CT.
|
The main objective of this work is to develop and evaluate an artificial intelligence system based on deep learning capable of automatically identifying, quantifying, and characterizing COVID-19 pneumonia patterns in order to assess disease severity and predict clinical outcomes, and to compare the prediction performance with human reader severity assessment and whole-lung radiomics. We propose a deep-learning-based scheme to automatically segment the different lesion subtypes in nonenhanced CT scans. The automatic lesion quantification was used to predict clinical outcomes. The proposed technique was independently tested in a multicentric cohort of 103 patients, retrospectively collected between March and July 2020. Segmentation of lesion subtypes was evaluated using both overlapping (Dice) and distance-based (Hausdorff and average surface) metrics, while the proposed system to predict clinically relevant outcomes was assessed using the area under the curve (AUC). Additionally, other metrics including sensitivity, specificity, positive predictive value, and negative predictive value were estimated, with 95% confidence intervals. The agreement between the automatic estimate of parenchymal damage (%) and the radiologists' severity scoring was strong, with a Spearman correlation coefficient (R) of 0.83. The automatic quantification of lesion subtypes was able to predict patient mortality, admission to the intensive care unit (ICU), and need for mechanical ventilation with AUCs of 0.87, 0.73, and 0.68, respectively. The proposed artificial intelligence system enabled better prediction of these clinically relevant outcomes than the radiologists' interpretation and whole-lung radiomics. In conclusion, deep learning lesion subtyping in COVID-19 pneumonia from noncontrast chest CT enables quantitative assessment of disease severity and better prediction of clinical outcomes with respect to whole-lung radiomics or radiologists' severity scores.
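The Dice overlap metric used to evaluate the lesion-subtype segmentations is simple to state; below is a NumPy sketch on toy binary masks (shapes and values are illustrative):

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    # Dice overlap between two binary masks: 2*|A intersect B| / (|A| + |B|).
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1   # predicted lesion, 16 px
gt   = np.zeros((8, 8), dtype=int); gt[3:7, 3:7] = 1     # ground truth, 16 px
d = dice(pred, gt)
```

Dice rewards overlap only; the distance-based Hausdorff and average-surface metrics the study also reports additionally penalize boundary errors far from the ground truth.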
|
Scientific reports
| 2022-06-08T00:00:00
|
[
"DavidBermejo-Peláez",
"RaúlSan José Estépar",
"MaríaFernández-Velilla",
"CarmeloPalacios Miras",
"GuillermoGallardo Madueño",
"MarianaBenegas",
"CarolinaGotera Rivera",
"SandraCuerpo",
"MiguelLuengo-Oroz",
"JacoboSellarés",
"MarceloSánchez",
"GorkaBastarrika",
"GermanPeces Barba",
"Luis MSeijo",
"María JLedesma-Carbayo"
] |
10.1038/s41598-022-13298-8
10.1001/jama.2020.1585
10.1016/S0140-6736(20)30566-3
10.1136/thoraxjnl-2020-216001
10.2214/AJR.20.22976
10.1148/radiol.2020201754
10.1148/ryai.2020200098
10.1016/j.media.2020.101860
10.1038/s41598-021-90991-0
10.1148/radiol.2020200905
10.1038/s41746-021-00446-z
10.1038/s41598-021-84561-7
10.1148/RADIOL.2020202439
10.1038/s41467-020-17971-2
10.1016/j.cell.2020.04.045
10.1038/s41598-019-56989-5
10.1038/s41746-020-00369-1
10.1148/radiol.2020200370
10.1148/rg.2020200159
10.21037/atm-20-3026
10.1148/ryct.2020200322
10.1016/j.ejrad.2021.109583
10.1007/s00330-021-07957-z
10.1186/s41747-020-00173-2
10.1148/ryct.2020200110
10.1016/j.acra.2011.01.011
10.1109/TMI.2016.2535865
10.1016/j.acra.2015.12.021
10.1038/s41592-020-01008-z
10.21227/w3aw-rv39
|
Towards an effective model for lung disease classification: Using Dense Capsule Nets for early classification of lung diseases.
|
Machine learning and computer vision have been at the frontier of the fight against the COVID-19 pandemic. Radiology has vastly improved the diagnosis of diseases, especially lung diseases, through the early assessment of key disease factors. Chest X-rays have thus become among the most commonly used radiological tests to detect and diagnose many lung diseases. However, detecting lung disease from X-rays is a significantly challenging task that depends on the availability of skilled radiologists. There has been a recent increase in attention to the design of Convolutional Neural Network (CNN) models for lung disease classification. CNNs require a considerable amount of training data, and they cannot handle translation and rotation of the input correctly. The recently proposed Capsule Networks (CapsNets) are a learning architecture that aims to overcome these shortcomings of CNNs. CapsNets are robust to rotation and complex translation, and they require much less training data, which suits medical imaging datasets, including radiological chest X-ray images. In this research, the adoption and integration of CapsNets into the problem of chest X-ray classification have been explored. The aim is to design a deep model using CapsNets that increases the accuracy of the classification problem involved. We have used convolution blocks that take input images and generate convolution layers used as input to a capsule block. Twelve capsule layers are used, and the output of each capsule block serves as input to the next convolution block. The process is repeated for all blocks. The experimental results show that the proposed architecture yields better results than existing CNN techniques by achieving a better average area under the curve (AUC).
Furthermore, the proposed DNet achieves the best performance on the ChestX-ray14 dataset relative to traditional CNNs, and it is validated that DNet performs better as total network depth increases.
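The capsule mechanism this abstract builds on can be illustrated with the standard "squash" nonlinearity used in capsule networks, which rescales each capsule's output vector so its length lies in [0, 1) and can be read as a probability while its orientation is preserved. This is a generic, minimal sketch of that function, not the authors' DNet implementation:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity.

    Maps a capsule vector s to (||s||^2 / (1 + ||s||^2)) * (s / ||s||),
    so the output length encodes presence probability in [0, 1)
    while the direction (the encoded pose) is unchanged.
    """
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# A vector of length 5 is squashed to length 25/26 ~= 0.96,
# keeping its direction (0.6, 0.8).
v = squash(np.array([3.0, 4.0]))
```

In a full CapsNet, `squash` is applied to the weighted sums produced by routing-by-agreement between consecutive capsule layers; here it is shown standalone for clarity.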
|
Applied soft computing
| 2022-06-07T00:00:00
|
[
"Faizan Karim",
"Munam Ali Shah",
"Hasan Ali Khattak",
"Zoobia Ameer",
"Umar Shoaib",
"Hafiz Tayyab Rauf",
"Fadi Al-Turjman"
] |
10.1016/j.asoc.2022.109077
|
Prior-aware autoencoders for lung pathology segmentation.
|
Segmentation of lung pathology in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneities in size, shape, location, and texture, on one side, and their visual similarity to surrounding tissues, on the other, make it challenging to perform reliable automatic lesion segmentation. To improve segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model that learns the distribution of healthy lung regions and reconstructs pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissue. Detected regions that represent prior information regarding the shape and location of pathologies are then integrated into a segmentation network to guide the attention of the model toward more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, namely pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and Covid-19 lesions, on five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all cases by significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLCs by 0.101, and Covid-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding lung pathologies, and that integrating such knowledge into a segmentation network leads to more accurate delineations.
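The core idea described here, comparing a pathological image against its "healthy" reconstruction and using the residual as a spatial prior for the segmentation network, can be sketched minimally. The function name `pathology_prior` and the fixed threshold are illustrative assumptions; the paper's actual NAA pipeline is a learned autoencoder, not a thresholded difference:

```python
import numpy as np

def pathology_prior(image, reconstruction, threshold=0.1):
    """Toy residual-based prior (assumed interface, not the paper's code).

    image          : pathological input (2D array, intensities in [0, 1])
    reconstruction : NAA-style pathology-free reconstruction of the same slice
    Returns a (2, H, W) array: the original image plus a binary prior
    channel flagging pixels where the residual exceeds the threshold,
    i.e. regions the reconstruction 'replaced' with healthy appearance.
    """
    residual = np.abs(image - reconstruction)
    prior = (residual > threshold).astype(np.float32)
    # Stack the prior as an extra input channel for a downstream
    # segmentation network, guiding its attention to likely lesions.
    return np.stack([image, prior], axis=0)
```

In the actual framework, this prior map is produced by the trained NAA and fed to the segmentation network alongside the image; the sketch only shows how a residual turns into an attention-guiding channel.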
|
Medical image analysis
| 2022-06-03T00:00:00
|
[
"Mehdi Astaraki",
"Örjan Smedby",
"Chunliang Wang"
] |
10.1016/j.media.2022.102491
|