This is the code for our weakly supervised building segmentation paper, accepted at IGARSS 2019.
Paper: Weakly Supervised Building Segmentation From Aerial Images
We use the disaster response dataset released as the Mapping Challenge. You should be able to get the data from this website. If you have any trouble acquiring the dataset, please contact us.
Create and activate the conda environment:

```bash
conda env create -f environment.yml
conda activate wbseg
```

All settings are stored in `wbseg/config.py`. Edit that file to set the dataset path (`ROOT_DIR`), the output directory (`DIRECTORY`), the supervision mode, the loss function, the batch size, and other training options.
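For orientation, a `wbseg/config.py` might look like the sketch below. Only the option names explicitly mentioned in this README (`ROOT_DIR`, `DIRECTORY`, `SUPERVISION`, `LOSS_FN`) come from the repo; the batch-size variable name and all values shown are illustrative assumptions.

```python
# Illustrative config sketch -- values and the BATCH_SIZE name are assumptions.
ROOT_DIR = "/data/mapping-challenge"  # path to the Mapping Challenge dataset
DIRECTORY = "./output/run1"           # weights, loss curves, and metrics go here
SUPERVISION = "Gaussian"              # one of: Gaussian, Naive, GrabCut, Full
LOSS_FN = "Proposed_OneSided"         # one of: Proposed_OneSided, CE
BATCH_SIZE = 8                        # adjust to your GPU memory
```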
Train a model:

```bash
python -m wbseg.train
```

This trains the U-Net model and saves the trained weights, loss curves, and metrics to the directory specified by `DIRECTORY` in `config.py`.
Visualize results:

```bash
python -m wbseg.visualize_trained
```

This loads the trained model and saves prediction figures to the same output directory.
Set `SUPERVISION` in `wbseg/config.py` to one of:

| Value | Description |
|---|---|
| `Gaussian` | Dense masks derived from bounding boxes using a bivariate Gaussian (default) |
| `Naive` | All pixels inside bounding boxes set to foreground |
| `GrabCut` | OpenCV GrabCut applied within each bounding box |
| `Full` | Full supervision using ground-truth segmentation masks (upper bound) |
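To illustrate the `Gaussian` mode, the sketch below builds a soft foreground mask by placing a bivariate Gaussian inside each bounding box. The function name, the `sigma_scale` parameter, and the exact sigma-to-box-size scaling are illustrative assumptions, not the repo's implementation.

```python
import numpy as np

def gaussian_mask(h, w, boxes, sigma_scale=0.5):
    """Sketch of Gaussian supervision: one bivariate Gaussian per box.

    boxes: iterable of (x0, y0, x1, y1) pixel coordinates.
    sigma_scale: assumed scaling of sigma relative to box half-size.
    Returns a float32 mask in [0, 1] of shape (h, w).
    """
    mask = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for x0, y0, x1, y1 in boxes:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0          # box center
        sx = max((x1 - x0) * sigma_scale / 2.0, 1e-6)      # sigma along x
        sy = max((y1 - y0) * sigma_scale / 2.0, 1e-6)      # sigma along y
        g = np.exp(-(((xs - cx) / sx) ** 2 + ((ys - cy) / sy) ** 2) / 2.0)
        mask = np.maximum(mask, g)                         # overlap: take max
    return mask
```

The mask peaks at 1 at each box center and decays toward the box edges, giving the dense (but soft) targets that the `Gaussian` mode uses in place of exact building outlines.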
Set `LOSS_FN` in `wbseg/config.py` to one of:

| Value | Description |
|---|---|
| `Proposed_OneSided` | Proposed one-sided loss (default) |
| `CE` | Standard binary cross-entropy |
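The exact form of the proposed one-sided loss is defined in the paper and the `wbseg` code. As a purely hypothetical sketch of the general idea, the variant below applies the background term of binary cross-entropy only where the soft target is exactly zero, so uncertain pixels inside boxes are never pushed toward background. The function name and every detail here are assumptions, not the paper's loss.

```python
import numpy as np

def one_sided_bce(pred, target, eps=1e-7):
    """Hypothetical one-sided variant of BCE (NOT the paper's exact loss).

    Foreground term applies everywhere the soft target is positive;
    background term applies only where the target is exactly zero.
    """
    pred = np.clip(pred, eps, 1 - eps)
    fg = -target * np.log(pred)                                  # pull up where target > 0
    bg = -np.where(target == 0, np.log(1 - pred), 0.0)           # push down only on true background
    return float(np.mean(fg + bg))
```

Compare with standard BCE, which would also penalize high predictions at the soft-labeled pixels inside boxes even though those labels are noisy.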
If you find this paper or code helpful, please cite:
M. Usman Rafique, Nathan Jacobs, "Weakly Supervised Building Segmentation From Aerial Images",
In: IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2019.
Please feel free to contact us with any questions or comments.
The code is provided for academic purposes only without any guarantees.