Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/36632
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Passos, Leandro A | en_UK |
dc.contributor.author | Papa, João Paulo | en_UK |
dc.contributor.author | Hussain, Amir | en_UK |
dc.contributor.author | Adeel, Ahsan | en_UK |
dc.date.accessioned | 2025-03-05T01:15:35Z | - |
dc.date.available | 2025-03-05T01:15:35Z | - |
dc.date.issued | 2023-03-28 | en_UK |
dc.identifier.uri | http://hdl.handle.net/1893/36632 | - |
dc.description.abstract | Despite the recent success of machine learning algorithms, most models face drawbacks when considering more complex tasks requiring interaction between different sources, such as multimodal input data and logical time sequences. On the other hand, the biological brain is highly sharpened in this sense, empowered to automatically manage and integrate such streams of information. In this context, this work draws inspiration from recent discoveries in brain cortical circuits to propose a more biologically plausible self-supervised machine learning approach. This combines multimodal information using intra-layer modulations together with Canonical Correlation Analysis, and a memory mechanism to keep track of temporal data, the overall approach termed Canonical Cortical Graph Neural Networks. This is shown to outperform recent state-of-the-art models in terms of clean audio reconstruction and energy efficiency for a benchmark audio-visual speech dataset. The enhanced performance is demonstrated through a reduced and smoother neuron firing rate distribution, suggesting that the proposed model is amenable for speech enhancement in future audio-visual hearing aid devices. | en_UK |
dc.language.iso | en | en_UK |
dc.publisher | Elsevier BV | en_UK |
dc.relation | Passos LA, Papa JP, Hussain A & Adeel A (2023) Canonical cortical graph neural networks and its application for speech enhancement in audio-visual hearing aids. <i>Neurocomputing</i>, 527, pp. 196-203. https://doi.org/10.1016/j.neucom.2022.11.081 | en_UK |
dc.rights | This is an open access article distributed under the terms of the Creative Commons CC-BY license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. You are not required to obtain permission to reuse this article. | en_UK |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en_UK |
dc.subject | Cortical circuits | en_UK |
dc.subject | Canonical correlation analysis | en_UK |
dc.subject | Multimodal learning | en_UK |
dc.subject | Graph neural network | en_UK |
dc.subject | Prior frames neighbourhood | en_UK |
dc.subject | Positional encoding | en_UK |
dc.title | Canonical cortical graph neural networks and its application for speech enhancement in audio-visual hearing aids | en_UK |
dc.type | Journal Article | en_UK |
dc.identifier.doi | 10.1016/j.neucom.2022.11.081 | en_UK |
dc.citation.jtitle | Neurocomputing | en_UK |
dc.citation.issn | 0925-2312 | en_UK |
dc.citation.volume | 527 | en_UK |
dc.citation.spage | 196 | en_UK |
dc.citation.epage | 203 | en_UK |
dc.citation.publicationstatus | Published | en_UK |
dc.citation.peerreviewed | Refereed | en_UK |
dc.type.status | VoR - Version of Record | en_UK |
dc.contributor.funder | EPSRC Engineering and Physical Sciences Research Council | en_UK |
dc.author.email | ahsan.adeel1@stir.ac.uk | en_UK |
dc.citation.date | 08/12/2022 | en_UK |
dc.contributor.affiliation | University of Wolverhampton | en_UK |
dc.contributor.affiliation | Sao Paulo State University (Universidade Estadual Paulista) | en_UK |
dc.contributor.affiliation | Edinburgh Napier University | en_UK |
dc.contributor.affiliation | University of Wolverhampton | en_UK |
dc.identifier.isi | WOS:000925472300001 | en_UK |
dc.identifier.scopusid | 2-s2.0-85146713633 | en_UK |
dc.identifier.wtid | 2084833 | en_UK |
dc.date.accepted | 2022-11-21 | en_UK |
dcterms.dateAccepted | 2022-11-21 | en_UK |
dc.date.filedepositdate | 2025-02-28 | en_UK |
dc.relation.funderproject | COG-MHEAR: Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids | en_UK |
dc.relation.funderref | 1753817 | en_UK |
rioxxterms.apc | not required | en_UK |
rioxxterms.version | VoR | en_UK |
local.rioxx.author | Passos, Leandro A| | en_UK |
local.rioxx.author | Papa, João Paulo| | en_UK |
local.rioxx.author | Hussain, Amir| | en_UK |
local.rioxx.author | Adeel, Ahsan| | en_UK |
local.rioxx.project | 1753817|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 | en_UK |
local.rioxx.freetoreaddate | 2025-02-28 | en_UK |
local.rioxx.licence | http://creativecommons.org/licenses/by/4.0/|2025-02-28| | en_UK |
local.rioxx.filename | 1-s2.0-S0925231222014758-main.pdf | en_UK |
local.rioxx.filecount | 1 | en_UK |
local.rioxx.source | 0925-2312 | en_UK |
Appears in Collections: | Computing Science and Mathematics Journal Articles
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
1-s2.0-S0925231222014758-main.pdf | Fulltext - Published Version | 1.53 MB | Adobe PDF | View/Open |
This item is protected by original copyright
A file in this item is licensed under a Creative Commons License
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.