Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/36756
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Ali, Hazrat (en_UK)
dc.contributor.author: Qureshi, Rizwan (en_UK)
dc.contributor.author: Shah, Zubair (en_UK)
dc.date.accessioned: 2025-03-11T01:02:51Z
dc.date.available: 2025-03-11T01:02:51Z
dc.date.issued: 2023-11-17 (en_UK)
dc.identifier.other: e47445 (en_UK)
dc.identifier.uri: http://hdl.handle.net/1893/36756
dc.description.abstract: Background: Transformer-based models are gaining popularity in medical imaging and cancer imaging applications. Many recent studies have demonstrated the use of transformer-based models for brain cancer imaging applications such as diagnosis and tumor segmentation. Objective: This study aims to review how different vision transformers (ViTs) contributed to advancing brain cancer diagnosis and tumor segmentation using brain image data. This study examines the different architectures developed for enhancing the task of brain tumor segmentation. Furthermore, it explores how ViT-based models augmented the performance of convolutional neural networks for brain cancer imaging. Methods: The study search and study selection in this review followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. The search comprised 4 popular scientific databases: PubMed, Scopus, IEEE Xplore, and Google Scholar. The search terms were formulated to cover the interventions (ie, ViTs) and the target application (ie, brain cancer imaging). Title and abstract screening for study selection was performed by 2 reviewers independently and validated by a third reviewer. Data extraction was performed by 2 reviewers and validated by a third reviewer. Finally, the data were synthesized using a narrative approach. Results: Of the 736 retrieved studies, 22 (3%) were included in this review. These studies were published in 2021 and 2022. The most commonly addressed task in these studies was tumor segmentation using ViTs. No study reported early detection of brain cancer. Among the different ViT architectures, Shifted Window transformer–based architectures have recently become the most popular choice of the research community. Among the included architectures, UNet transformer and TransUNet had the highest number of parameters and thus needed a cluster of as many as 8 graphics processing units for model training. The brain tumor segmentation challenge data set was the most popular data set used in the included studies. ViT was used in different combinations with convolutional neural networks to capture both the global and local context of the input brain imaging data. Conclusions: It can be argued that the computational complexity of transformer architectures is a bottleneck in advancing the field and enabling clinical transformations. This review provides the current state of knowledge on the topic, and the findings of this review will be helpful for researchers in the field of medical artificial intelligence and its applications in brain cancer. (en_UK)
dc.language.iso: en (en_UK)
dc.publisher: JMIR Publications Inc. (en_UK)
dc.relation: Ali H, Qureshi R & Shah Z (2023) Artificial Intelligence-Based Methods for Integrating Local and Global Features for Brain Cancer Imaging: Scoping Review. <i>JMIR Medical Informatics</i>, 11, Art. No.: e47445. https://doi.org/10.2196/47445 (en_UK)
dc.rights: ©Hazrat Ali, Rizwan Qureshi, Zubair Shah. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 17.11.2023. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included. (en_UK)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ (en_UK)
dc.subject: artificial intelligence (en_UK)
dc.subject: AI (en_UK)
dc.subject: brain cancer (en_UK)
dc.subject: brain tumour (en_UK)
dc.subject: medical imaging (en_UK)
dc.subject: segmentation (en_UK)
dc.subject: vision transformers (en_UK)
dc.title: Artificial Intelligence-Based Methods for Integrating Local and Global Features for Brain Cancer Imaging: Scoping Review (en_UK)
dc.type: Journal Article (en_UK)
dc.identifier.doi: 10.2196/47445 (en_UK)
dc.identifier.pmid: 37976086 (en_UK)
dc.citation.jtitle: JMIR Medical Informatics (en_UK)
dc.citation.issn: 2291-9694 (en_UK)
dc.citation.volume: 11 (en_UK)
dc.citation.publicationstatus: Published (en_UK)
dc.citation.peerreviewed: Refereed (en_UK)
dc.type.status: VoR - Version of Record (en_UK)
dc.author.email: ali.hazrat@stir.ac.uk (en_UK)
dc.citation.date: 17/11/2023 (en_UK)
dc.contributor.affiliation: Hamad Bin Khalifa University (en_UK)
dc.contributor.affiliation: Hamad Bin Khalifa University (en_UK)
dc.contributor.affiliation: University of Texas (en_UK)
dc.identifier.isi: WOS:001114723500001 (en_UK)
dc.identifier.scopusid: 2-s2.0-85179162228 (en_UK)
dc.identifier.wtid: 2075588 (en_UK)
dc.contributor.orcid: 0000-0003-3058-5794 (en_UK)
dc.contributor.orcid: 0000-0002-0039-982X (en_UK)
dc.contributor.orcid: 0000-0001-7389-3274 (en_UK)
dc.date.accepted: 2023-07-12 (en_UK)
dcterms.dateAccepted: 2023-07-12 (en_UK)
dc.date.filedepositdate: 2025-01-27 (en_UK)
rioxxterms.apc: not charged (en_UK)
rioxxterms.version: VoR (en_UK)
local.rioxx.author: Ali, Hazrat|0000-0003-3058-5794 (en_UK)
local.rioxx.author: Qureshi, Rizwan|0000-0002-0039-982X (en_UK)
local.rioxx.author: Shah, Zubair|0000-0001-7389-3274 (en_UK)
local.rioxx.project: Internal Project|University of Stirling|https://isni.org/isni/0000000122484331 (en_UK)
local.rioxx.freetoreaddate: 2025-01-27 (en_UK)
local.rioxx.licence: http://creativecommons.org/licenses/by/4.0/|2025-01-27| (en_UK)
local.rioxx.filename: medinform-2023-1-e47445.pdf (en_UK)
local.rioxx.filecount: 1 (en_UK)
local.rioxx.source: 2291-9694 (en_UK)
Appears in Collections:Computing Science and Mathematics Journal Articles

Files in This Item:
File: medinform-2023-1-e47445.pdf
Description: Fulltext - Published Version
Size: 569.1 kB
Format: Adobe PDF


This item is protected by original copyright



A file in this item is licensed under a Creative Commons License.
