LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation
by Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, and Yanfei Zhong
[Paper], [Video], [Dataset], [BibTeX], [Leaderboard-SEG], [Leaderboard-UDA]
News
- 2021/12/13, The URLs of the HRNet pre-trained models have been updated.
- 2021/12/10, LoveDA has been included in TorchGeo (see the usage sketch after this list).
- 2021/11/30, The contests have been moved to a new server: LoveDA Semantic Segmentation Challenge and LoveDA Unsupervised Domain Adaptation Challenge.
- 2021/11/11, LoveDA has been included in MMSegmentation. 🔥🔥 The Semantic Segmentation task can be prepared by following dataset_prepare.md. 🔥🔥
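Since LoveDA is available through TorchGeo, the dataset can also be loaded via the `torchgeo.datasets.LoveDA` class. The snippet below is a minimal sketch assuming a TorchGeo release that ships this class; the root directory and the DataLoader settings are placeholder choices, not recommendations.

```python
# Minimal sketch: loading LoveDA via TorchGeo (assumes a TorchGeo release with LoveDA support).
# The root directory and DataLoader settings below are placeholders.
from torch.utils.data import DataLoader
from torchgeo.datasets import LoveDA

# Download the Urban and Rural scenes of the Train split into ./data/loveda.
train_ds = LoveDA(
    root="./data/loveda",
    split="train",
    scene=["urban", "rural"],
    download=True,
)

# Each sample is a dict holding an "image" tensor and a "mask" tensor.
train_loader = DataLoader(train_ds, batch_size=8, shuffle=True, num_workers=4)
batch = next(iter(train_loader))
images, masks = batch["image"], batch["mask"]
```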
Highlights
- 5987 high spatial resolution (0.3 m) remote sensing images from Nanjing, Changzhou, and Wuhan
- Focuses on the different geographical environments of the Urban and Rural domains
- Advances both semantic segmentation and domain adaptation tasks
- Three considerable challenges:
- Multi-scale objects
- Complex background samples
- Inconsistent class distributions
Citation
If you use LoveDA in your research, please cite our NeurIPS 2021 Datasets and Benchmarks Track paper.
@inproceedings{NEURIPS_DATASETS_AND_BENCHMARKS2021_4e732ced,
  author    = {Wang, Junjue and Zheng, Zhuo and Ma, Ailong and Lu, Xiaoyan and Zhong, Yanfei},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  editor    = {J. Vanschoren and S. Yeung},
  pages     = {},
  publisher = {Curran Associates, Inc.},
  title     = {LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
  url       = {https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/4e732ced3463d06de0ca9a15b6153677-Paper-round2.pdf},
  volume    = {1},
  year      = {2021}
}
@dataset{junjue_wang_2021_5706578,
  author    = {Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
  title     = {Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
  month     = oct,
  year      = 2021,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.5706578},
  url       = {https://doi.org/10.5281/zenodo.5706578}
}
Dataset and Contest
The LoveDA dataset is released on Zenodo, Google Drive, and Baidu Drive (extraction code: 27vc).
You can develop your models on the Train and Validation sets.
Category labels: background – 1, building – 2, road – 3, water – 4, barren – 5, forest – 6, agriculture – 7. No-data regions are assigned the value 0 and should be ignored during training and evaluation. The provided data loader will help you construct your pipeline; a minimal loading sketch is shown below.
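To make the label convention concrete, here is a minimal PyTorch-style sketch of a mask loader that remaps the no-data value 0 to an ignore index. The file path, the `IGNORE_INDEX` value, and the label-shifting choice are illustrative assumptions, not the settings of the official data loader.

```python
# Minimal sketch of reading a LoveDA mask and handling the no-data value.
# Assumptions: masks are single-channel PNGs with values 0-7 as described above;
# the path and IGNORE_INDEX (-1) are placeholders.
import numpy as np
import torch
from PIL import Image

CLASSES = ["background", "building", "road", "water",
           "barren", "forest", "agriculture"]   # labels 1-7 in the raw masks
IGNORE_INDEX = -1                               # value fed to the loss for no-data pixels

def load_mask(mask_path: str) -> torch.Tensor:
    """Read a mask PNG, shift labels 1-7 to 0-6, and map no-data (0) to IGNORE_INDEX."""
    mask = np.array(Image.open(mask_path), dtype=np.int64)
    mask = mask - 1                  # background..agriculture -> 0..6, no-data -> -1
    mask[mask == -1] = IGNORE_INDEX  # explicit, in case IGNORE_INDEX is changed
    return torch.from_numpy(mask)

# Example: respect the ignore index directly in the loss.
criterion = torch.nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)
```

Shifting the labels keeps the model output at seven channels; if you prefer to keep the raw 0–7 values instead, you would set `ignore_index=0` and leave the class indices untouched.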
Submit your test results to the LoveDA Semantic Segmentation Challenge and the LoveDA Unsupervised Domain Adaptation Challenge to receive your Test scores.
Feel free to design your own models; we look forward to your exciting results!
License
The data and the copyright of the data are owned by RSIDEA, Wuhan University. Use of the Google Earth images must respect the "Google Earth" terms of use. All images and their associated annotations in LoveDA can be used for academic purposes only; any commercial use is prohibited.