Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/30394
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Nogueira, Keiller | en_UK |
dc.contributor.author | Dalla Mura, Mauro | en_UK |
dc.contributor.author | Chanussot, Jocelyn | en_UK |
dc.contributor.author | Schwartz, William Robson | en_UK |
dc.contributor.author | dos Santos, Jefersson A | en_UK |
dc.date.accessioned | 2019-11-01T01:01:47Z | - |
dc.date.available | 2019-11-01T01:01:47Z | - |
dc.date.issued | 2016-12 | en_UK |
dc.identifier.uri | http://hdl.handle.net/1893/30394 | - |
dc.description.abstract | Land cover classification is a task that requires methods capable of learning high-level features while dealing with high volumes of data. Convolutional Networks (ConvNets) overcome these challenges by learning specific, adaptable features from the data while simultaneously learning classifiers. In this work, we propose a novel technique to automatically perform pixel-wise land cover classification. To the best of our knowledge, no other work in the literature performs pixel-wise semantic segmentation based on data-driven feature descriptors for high-resolution remote sensing images. The main idea is to exploit the power of ConvNet feature representations to learn how to semantically segment remote sensing images. First, our method learns each label in a pixel-wise manner by taking into account the spatial context of each pixel. In the prediction phase, the probability of a pixel belonging to a class is likewise estimated according to its spatial context and the learned patterns. We conducted a systematic evaluation of the proposed algorithm using two remote sensing datasets with very distinct properties. Our results show that the proposed algorithm yields accuracy improvements of 5 to 15% over traditional and state-of-the-art methods. | en_UK |
dc.language.iso | en | en_UK |
dc.publisher | IEEE | en_UK |
dc.relation | Nogueira K, Dalla Mura M, Chanussot J, Schwartz WR & dos Santos JA (2016) Learning to semantically segment high-resolution remote sensing images. In: <i>2016 23rd International Conference on Pattern Recognition (ICPR) Proceedings</i>. 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 04.12.2016-08.12.2016. Piscataway, NJ: IEEE, pp. 3566-3571. https://doi.org/10.1109/icpr.2016.7900187 | en_UK |
dc.rights | © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_UK |
dc.subject | Land-cover Mapping | en_UK |
dc.subject | Pixel-wise Classification | en_UK |
dc.subject | Semantic Segmentation | en_UK |
dc.subject | Deep Learning | en_UK |
dc.subject | Remote Sensing | en_UK |
dc.subject | Feature Learning | en_UK |
dc.subject | High-resolution Images | en_UK |
dc.title | Learning to semantically segment high-resolution remote sensing images | en_UK |
dc.type | Conference Paper | en_UK |
dc.identifier.doi | 10.1109/icpr.2016.7900187 | en_UK |
dc.citation.spage | 3566 | en_UK |
dc.citation.epage | 3571 | en_UK |
dc.citation.publicationstatus | Published | en_UK |
dc.type.status | AM - Accepted Manuscript | en_UK |
dc.contributor.funder | Brazilian National Research Council | en_UK |
dc.citation.btitle | 2016 23rd International Conference on Pattern Recognition (ICPR) Proceedings | en_UK |
dc.citation.conferencedates | 2016-12-04 - 2016-12-08 | en_UK |
dc.citation.conferencelocation | Cancun, Mexico | en_UK |
dc.citation.conferencename | 2016 23rd International Conference on Pattern Recognition (ICPR) | en_UK |
dc.citation.date | 24/04/2017 | en_UK |
dc.citation.isbn | 978-1-5090-4848-9 | en_UK |
dc.citation.isbn | 9781509048472 | en_UK |
dc.publisher.address | Piscataway, NJ | en_UK |
dc.contributor.affiliation | Federal University of Minas Gerais | en_UK |
dc.contributor.affiliation | Université de Grenoble | en_UK |
dc.contributor.affiliation | Université de Grenoble | en_UK |
dc.contributor.affiliation | Federal University of Minas Gerais | en_UK |
dc.contributor.affiliation | Federal University of Minas Gerais | en_UK |
dc.identifier.scopusid | 2-s2.0-85019077911 | en_UK |
dc.identifier.wtid | 1469459 | en_UK |
dc.contributor.orcid | 0000-0003-3308-6384 | en_UK |
dc.contributor.orcid | 0000-0002-8889-1586 | en_UK |
dc.date.accepted | 2016-07-13 | en_UK |
dcterms.dateAccepted | 2016-07-13 | en_UK |
dc.date.filedepositdate | 2019-10-31 | en_UK |
rioxxterms.apc | not required | en_UK |
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_UK |
rioxxterms.version | AM | en_UK |
local.rioxx.author | Nogueira, Keiller|0000-0003-3308-6384 | en_UK |
local.rioxx.author | Dalla Mura, Mauro| | en_UK |
local.rioxx.author | Chanussot, Jocelyn| | en_UK |
local.rioxx.author | Schwartz, William Robson| | en_UK |
local.rioxx.author | dos Santos, Jefersson A|0000-0002-8889-1586 | en_UK |
local.rioxx.project | Project ID unknown|Brazilian National Research Council| | en_UK |
local.rioxx.freetoreaddate | 2019-10-31 | en_UK |
local.rioxx.licence | http://www.rioxx.net/licenses/all-rights-reserved|2019-10-31| | en_UK |
local.rioxx.filename | paper_2016_ICPR_Nogueira.pdf | en_UK |
local.rioxx.filecount | 1 | en_UK |
local.rioxx.source | 9781509048472 | en_UK |
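The pipeline the abstract outlines — learn per-pixel labels from each pixel's spatial context, then predict each pixel's class from that same context — can be sketched minimally in plain NumPy. The patch size, the threshold, and the trivial mean-intensity rule standing in for the learned ConvNet are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def extract_patches(image, size=5):
    """Extract a (size x size) spatial-context patch around every pixel.

    The image is reflect-padded so that border pixels also receive full
    patches, mirroring the pixel-wise setup described in the abstract.
    """
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    h, w = image.shape
    patches = np.empty((h, w, size, size), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            patches[i, j] = padded[i:i + size, j:j + size]
    return patches

def classify_pixels(image, threshold=0.5, size=5):
    """Label each pixel from the mean intensity of its context patch.

    Stand-in for the learned ConvNet: a pixel is assigned class 1 when
    the mean of its spatial context exceeds `threshold`, else class 0.
    """
    patches = extract_patches(image, size)
    means = patches.reshape(*image.shape, -1).mean(axis=-1)
    return (means > threshold).astype(np.uint8)

# Toy "remote sensing" tile: a bright block on a dark background.
img = np.zeros((8, 8), dtype=float)
img[2:6, 2:6] = 1.0
labels = classify_pixels(img)
print(labels[3, 3])  # → 1 (inside the bright block)
```

A real ConvNet would replace the mean-intensity rule with learned filters applied to each patch, producing per-class probabilities rather than a hard threshold.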
Appears in Collections: | Computing Science and Mathematics Conference Papers and Proceedings |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
paper_2016_ICPR_Nogueira.pdf | Fulltext - Accepted Version | 4.97 MB | Adobe PDF | View/Open |
This item is protected by original copyright
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.