Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/36632
Full metadata record
DC Field | Value | Language
dc.contributor.author | Passos, Leandro A | en_UK
dc.contributor.author | Papa, João Paulo | en_UK
dc.contributor.author | Hussain, Amir | en_UK
dc.contributor.author | Adeel, Ahsan | en_UK
dc.date.accessioned | 2025-03-05T01:15:35Z | -
dc.date.available | 2025-03-05T01:15:35Z | -
dc.date.issued | 2023-03-28 | en_UK
dc.identifier.uri | http://hdl.handle.net/1893/36632 | -
dc.description.abstract | Despite the recent success of machine learning algorithms, most models face drawbacks when considering more complex tasks requiring interaction between different sources, such as multimodal input data and logical time sequences. On the other hand, the biological brain is highly sharpened in this sense, empowered to automatically manage and integrate such streams of information. In this context, this work draws inspiration from recent discoveries in brain cortical circuits to propose a more biologically plausible self-supervised machine learning approach. This combines multimodal information using intra-layer modulations together with Canonical Correlation Analysis, and a memory mechanism to keep track of temporal data, the overall approach termed Canonical Cortical Graph Neural networks. This is shown to outperform recent state-of-the-art models in terms of clean audio reconstruction and energy efficiency for a benchmark audio-visual speech dataset. The enhanced performance is demonstrated through a reduced and smoother neuron firing rate distribution, suggesting that the proposed model is amenable for speech enhancement in future audio-visual hearing aid devices. | en_UK
dc.language.iso | en | en_UK
dc.publisher | Elsevier BV | en_UK
dc.relation | Passos LA, Papa JP, Hussain A & Adeel A (2023) Canonical cortical graph neural networks and its application for speech enhancement in audio-visual hearing aids. <i>Neurocomputing</i>, 527, pp. 196-203. https://doi.org/10.1016/j.neucom.2022.11.081 | en_UK
dc.rights | This is an open access article distributed under the terms of the Creative Commons CC-BY license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. You are not required to obtain permission to reuse this article. | en_UK
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en_UK
dc.subject | Cortical circuits | en_UK
dc.subject | Canonical correlation analysis | en_UK
dc.subject | Multimodal learning | en_UK
dc.subject | Graph neural network | en_UK
dc.subject | Prior frames neighbourhood | en_UK
dc.subject | Positional encoding | en_UK
dc.title | Canonical cortical graph neural networks and its application for speech enhancement in audio-visual hearing aids | en_UK
dc.type | Journal Article | en_UK
dc.identifier.doi | 10.1016/j.neucom.2022.11.081 | en_UK
dc.citation.jtitle | Neurocomputing | en_UK
dc.citation.issn | 0925-2312 | en_UK
dc.citation.volume | 527 | en_UK
dc.citation.spage | 196 | en_UK
dc.citation.epage | 203 | en_UK
dc.citation.publicationstatus | Published | en_UK
dc.citation.peerreviewed | Refereed | en_UK
dc.type.status | VoR - Version of Record | en_UK
dc.contributor.funder | EPSRC Engineering and Physical Sciences Research Council | en_UK
dc.author.email | ahsan.adeel1@stir.ac.uk | en_UK
dc.citation.date | 08/12/2022 | en_UK
dc.contributor.affiliation | University of Wolverhampton | en_UK
dc.contributor.affiliation | Sao Paulo State University (Universidade Estadual Paulista) | en_UK
dc.contributor.affiliation | Edinburgh Napier University | en_UK
dc.contributor.affiliation | University of Wolverhampton | en_UK
dc.identifier.isi | WOS:000925472300001 | en_UK
dc.identifier.scopusid | 2-s2.0-85146713633 | en_UK
dc.identifier.wtid | 2084833 | en_UK
dc.date.accepted | 2022-11-21 | en_UK
dcterms.dateAccepted | 2022-11-21 | en_UK
dc.date.filedepositdate | 2025-02-28 | en_UK
dc.relation.funderproject | COG-MHEAR: Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids | en_UK
dc.relation.funderref | 1753817 | en_UK
rioxxterms.apc | not required | en_UK
rioxxterms.version | VoR | en_UK
local.rioxx.author | Passos, Leandro A| | en_UK
local.rioxx.author | Papa, João Paulo| | en_UK
local.rioxx.author | Hussain, Amir| | en_UK
local.rioxx.author | Adeel, Ahsan| | en_UK
local.rioxx.project | 1753817|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 | en_UK
local.rioxx.freetoreaddate | 2025-02-28 | en_UK
local.rioxx.licence | http://creativecommons.org/licenses/by/4.0/|2025-02-28| | en_UK
local.rioxx.filename | 1-s2.0-S0925231222014758-main.pdf | en_UK
local.rioxx.filecount | 1 | en_UK
local.rioxx.source | 0925-2312 | en_UK
Appears in Collections:Computing Science and Mathematics Journal Articles

Files in This Item:
File | Description | Size | Format
1-s2.0-S0925231222014758-main.pdf | Fulltext - Published Version | 1.53 MB | Adobe PDF


This item is protected by original copyright

A file in this item is licensed under a Creative Commons License.

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.