Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/35587
Full metadata record
DC Field | Value | Language
dc.contributor.author | Goncalves, Diogo Nunes | en_UK
dc.contributor.author | Junior, Jose Marcato | en_UK
dc.contributor.author | Zamboni, Pedro | en_UK
dc.contributor.author | Pistori, Hemerson | en_UK
dc.contributor.author | Li, Jonathan | en_UK
dc.contributor.author | Nogueira, Keiller | en_UK
dc.contributor.author | Goncalves, Wesley | en_UK
dc.date.accessioned | 2023-11-29T01:01:47Z | -
dc.date.available | 2023-11-29T01:01:47Z | -
dc.date.issued | 2023 | en_UK
dc.identifier.uri | http://hdl.handle.net/1893/35587 | -
dc.description.abstract | Multi-task learning has proven to be effective in improving the performance of correlated tasks. Most of the existing methods use a backbone to extract initial features with independent branches for each task, and the exchange of information between the branches usually occurs through the concatenation or sum of the feature maps of the branches. However, this type of information exchange does not directly consider the local characteristics of the image nor the level of importance or correlation between the tasks. In this paper, we propose a semantic segmentation method, MTLSegFormer, which combines multi-task learning and attention mechanisms. After the backbone feature extraction, two feature maps are learned for each task. The first map is proposed to learn features related to its task, while the second map is obtained by applying learned visual attention to locally re-weigh the feature maps of the other tasks. In this way, weights are assigned to local regions of the image of other tasks that have greater importance for the specific task. Finally, the two maps are combined and used to solve a task. We tested the performance in two challenging problems with correlated tasks and observed a significant improvement in accuracy, mainly in tasks with high dependence on the others. | en_UK
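The cross-task attention described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation (MTLSegFormer builds on a transformer backbone with learned attention modules); the function and parameter names (`cross_task_reweigh`, `w_q`, `w_k`) are hypothetical, and plain dot-product attention stands in for the learned visual attention. The sketch only shows the idea: each task's queries locally re-weigh the other tasks' feature maps, and the attended result is combined with the task's own map.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_task_reweigh(feats, w_q, w_k):
    """For each task, re-weigh the feature maps of the *other* tasks with
    attention driven by the target task, then concatenate the attended
    result with the task's own features.

    feats: dict task_name -> (N, C) array, N = H*W flattened positions.
    w_q, w_k: (C, D) projection matrices (learned in practice, random here).
    Returns: dict task_name -> (N, 2*C) combined feature map.
    """
    combined = {}
    for t, f_t in feats.items():
        q = f_t @ w_q                                  # queries from target task
        attended = np.zeros_like(f_t)
        others = [s for s in feats if s != t]
        for s in others:
            k = feats[s] @ w_k                         # keys from the other task
            attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)  # (N, N)
            attended += attn @ feats[s]                # locally re-weighted map
        if others:
            attended /= len(others)                    # average over other tasks
        combined[t] = np.concatenate([f_t, attended], axis=-1)
    return combined

# toy example: 2 correlated tasks, 4 spatial positions, 8 channels
rng = np.random.default_rng(0)
feats = {"crop": rng.normal(size=(4, 8)), "weed": rng.normal(size=(4, 8))}
w_q, w_k = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
out = cross_task_reweigh(feats, w_q, w_k)
print(out["crop"].shape)  # (4, 16): own features concatenated with attended ones
```

The concatenation at the end mirrors the abstract's "the two maps are combined"; the paper's actual combination and attention parameterization are described in the full text.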
dc.language.iso | en | en_UK
dc.publisher | IEEE | en_UK
dc.relation | Goncalves DN, Junior JM, Zamboni P, Pistori H, Li J, Nogueira K & Goncalves W (2023) MTLSegFormer: Multi-Task Learning With Transformers for Semantic Segmentation in Precision Agriculture. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 18.06.2023-22.06.2023. Piscataway, NJ, USA: IEEE, pp. 6290-6298. https://doi.org/10.1109/CVPRW59228.2023.00669 | en_UK
dc.rights | © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_UK
dc.subject | Visualization | en_UK
dc.subject | Shape | en_UK
dc.subject | Semantic segmentation | en_UK
dc.subject | Feature extraction | en_UK
dc.subject | Transformers | en_UK
dc.subject | Multitasking | en_UK
dc.subject | Decoding | en_UK
dc.title | MTLSegFormer: Multi-Task Learning With Transformers for Semantic Segmentation in Precision Agriculture | en_UK
dc.type | Conference Paper | en_UK
dc.identifier.doi | 10.1109/CVPRW59228.2023.00669 | en_UK
dc.citation.issn | 2160-7516 | en_UK
dc.citation.spage | 6290 | en_UK
dc.citation.epage | 6298 | en_UK
dc.citation.publicationstatus | Published | en_UK
dc.type.status | AM - Accepted Manuscript | en_UK
dc.contributor.funder | Brazilian National Research Council | en_UK
dc.author.email | keiller.nogueira@stir.ac.uk | en_UK
dc.citation.btitle | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops | en_UK
dc.citation.conferencedates | 2023-06-18 - 2023-06-22 | en_UK
dc.citation.conferencelocation | Vancouver, BC, Canada | en_UK
dc.citation.conferencename | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | en_UK
dc.citation.date | 14/08/2023 | en_UK
dc.citation.isbn | 979-8-3503-0249-3 | en_UK
dc.publisher.address | Piscataway, NJ, USA | en_UK
dc.contributor.affiliation | Federal University of Mato Grosso do Sul | en_UK
dc.contributor.affiliation | Federal University of Mato Grosso do Sul | en_UK
dc.contributor.affiliation | Federal University of Mato Grosso do Sul | en_UK
dc.contributor.affiliation | Dom Bosco Catholic University | en_UK
dc.contributor.affiliation | University of Waterloo | en_UK
dc.contributor.affiliation | Computing Science | en_UK
dc.contributor.affiliation | Federal University of Mato Grosso do Sul | en_UK
dc.identifier.wtid | 1958992 | en_UK
dc.contributor.orcid | 0000-0003-3308-6384 | en_UK
dc.date.accepted | 2023-04-30 | en_UK
dcterms.dateAccepted | 2023-04-30 | en_UK
dc.date.filedepositdate | 2023-11-27 | en_UK
rioxxterms.apc | not required | en_UK
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_UK
rioxxterms.version | AM | en_UK
local.rioxx.author | Goncalves, Diogo Nunes| | en_UK
local.rioxx.author | Junior, Jose Marcato| | en_UK
local.rioxx.author | Zamboni, Pedro| | en_UK
local.rioxx.author | Pistori, Hemerson| | en_UK
local.rioxx.author | Li, Jonathan| | en_UK
local.rioxx.author | Nogueira, Keiller|0000-0003-3308-6384 | en_UK
local.rioxx.author | Goncalves, Wesley| | en_UK
local.rioxx.project | Project ID unknown|Brazilian National Research Council| | en_UK
local.rioxx.freetoreaddate | 2023-11-28 | en_UK
local.rioxx.licence | http://www.rioxx.net/licenses/all-rights-reserved|2023-11-28| | en_UK
local.rioxx.filename | NunesGoncalves-etal-CVPR-2023.pdf | en_UK
local.rioxx.filecount | 1 | en_UK
local.rioxx.source | 979-8-3503-0249-3 | en_UK
Appears in Collections: Computing Science and Mathematics Conference Papers and Proceedings

Files in This Item:
File | Description | Size | Format
NunesGoncalves-etal-CVPR-2023.pdf | Fulltext - Accepted Version | 10.01 MB | Adobe PDF


This item is protected by original copyright



Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository is available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk with details; we will remove the Work from public display in STORRE and investigate your claim.