
LayerSeg - Layered Structure Segmentation Tool

Open in Colab · Python 3.12+ · PyTorch · License: MIT

A UNet-based segmentation tool for binarizing cross-section images of layered materials. Originally developed for SEM micrographs of graphene oxide membranes, it can be applied to other layered structures with horizontal orientation.

The model was trained on synthetic data generated by SynthiLayer and is designed to highlight horizontal lamellar boundaries while suppressing vertical features. The resulting binary masks can be used for quantitative analysis of angular distributions and orientational order parameters.

Background

This tool is part of the image analysis pipeline described in:

Marnautov, N. A.; Matveev, M. V.; Gulin, A. A.; Kalai, T.; Bognar, B.; Rebrikova, A. T.; Chumakova, N. A. Orientational Ordering of Graphene Oxide Membranes by a Spin Probe Technique and SEM Image Analysis. J. Phys. Chem. C 2024, 128 (6), 2543-2550. DOI: 10.1021/acs.jpcc.3c07127

The full analysis pipeline involves:

  1. Segmentation of cross-section SEM micrographs using a pre-trained UNet (this tool)
  2. Binarization of the segmentation mask at an expert-selected threshold
  3. Angular distribution analysis of lamellar boundaries using the OrientLayer algorithm

Because the model was trained on synthetic data, there is an inherent domain shift between synthetic and real images. The tool generates masks at multiple threshold values so that an expert can visually compare them with the original image and select the optimal one.
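In code, the threshold sweep boils down to comparing the soft probability mask against each candidate value. A minimal sketch (the project's actual binarization lives in `src/binarizers/`, so the function name here is illustrative):

```python
import numpy as np

def threshold_sweep(soft_mask: np.ndarray, thresholds=(0.6, 0.7, 0.8, 0.9)):
    """Binarize a soft probability mask at several thresholds.

    Returns {threshold: boolean mask}. The expert then compares the
    candidate masks against the original image and picks the best one,
    mirroring the `binarizer_thresholds` idea from the config.
    """
    return {t: soft_mask >= t for t in thresholds}

# Example: a tiny 2x2 "probability map"
soft = np.array([[0.10, 0.65],
                 [0.75, 0.95]])
masks = threshold_sweep(soft)
```

Higher thresholds keep only high-confidence boundary pixels; lower ones are more permissive but noisier, which is exactly why the choice is left to a human expert.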


Quick Start (Google Colab)

The easiest way to use this tool is via Google Colab - no local setup required:


Local Installation

Requirements

  • Python 3.12+
  • PyTorch 2.1+ (CPU or CUDA)

Setup

# 1. Clone the repository
git clone https://github.com/NMar33/layerseg.git
cd layerseg

# 2. Install dependencies
pip install -r requirements.txt

# 3. Download the pre-trained model (~118 MB)
python download_model.py

For GPU support, edit requirements.txt and replace +cpu with +cu121 or +cu124 in the torch/torchvision lines.
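If the pins follow the usual `+cpu` wheel-tag convention, the edit can be done with a one-line `sed`. The snippet below demonstrates it on a temporary copy (the version numbers are placeholders, not the repo's actual pins); in the repo, run the `sed` line on `requirements.txt` itself:

```shell
# Placeholder pins written to a temporary copy for demonstration
printf 'torch==2.2.2+cpu\ntorchvision==0.17.2+cpu\n' > /tmp/requirements.txt
# Replace the CPU wheel tag with the CUDA 12.1 tag (use +cu124 for CUDA 12.4);
# -i.bak edits in place and keeps a .bak backup of the original
sed -i.bak 's/+cpu/+cu121/g' /tmp/requirements.txt
cat /tmp/requirements.txt
```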

Usage

CLI

python src/binarizer_cli.py -cfg configs/config.yaml

Place your images in data/example_imgs/ (or change path_imgs_dir in the config).

The tool generates reports in the reports/ directory:

  • Soft probability masks - continuous segmentation output for each scale factor
  • Binary masks at multiple thresholds - the expert selects the best one by visual comparison with the original image
  • Side-by-side comparison visualizations (PNG)
  • Combined PDF report

Python API

import sys
sys.path.insert(0, "src")
from entities import read_binarizer_params
from binarizer_pipeline import binarizer_pipeline

params = read_binarizer_params("configs/config.yaml")
binarizer_pipeline(params)

Configuration

All settings are in configs/config.yaml:

| Parameter | Default | Description |
|---|---|---|
| `path_imgs_dir` | `data/example_imgs` | Input images directory |
| `model_name` | `unet_220805.pth` | Pre-trained model filename |
| `scale_factors` | `[2, 1, 0.5]` | Image scaling multipliers (>1 upscales, useful for small images) |
| `gaussian_blur` | `False` | Apply Gaussian blur preprocessing (can help with noisy images) |
| `gaussian_blur_kernel_size` | `5` | Blur kernel size (odd number, 3-15) |
| `binarizer_thresholds` | `[0.6, ..., 0.9]` | Threshold sweep: generates a mask for each value so the expert can pick the best |
| `color_interest` | `black` | Which color represents detected boundaries: `black` or `white` |
| `device` | `cpu` | Computing device: `cpu` or `cuda` |
| `cache` | `True` | Cache smart-contrast computation (speeds up repeated runs) |
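For reference, a config using the defaults above might look like the fragment below. Key names are taken from the table; the exact layout of the shipped `configs/config.yaml` may differ, and the middle threshold values (elided as `...` in the table) are assumed here:

```yaml
path_imgs_dir: data/example_imgs
model_name: unet_220805.pth
scale_factors: [2, 1, 0.5]
gaussian_blur: false
gaussian_blur_kernel_size: 5
binarizer_thresholds: [0.6, 0.7, 0.8, 0.9]  # intermediate values assumed
color_interest: black
device: cpu
cache: true
```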

How It Works

  1. Load grayscale image and create scaled variants (multi-scale processing)
  2. Smart Contrast Preprocessing creates a 3-channel input from single-channel grayscale:
    • Channel 0: Original grayscale values
    • Channel 1: Local contrast normalization (3x3 sliding window)
    • Channel 2: Inverted local contrast normalization (7x7 sliding window)
  3. UNet Inference - the model produces a soft probability mask where each pixel represents the likelihood of belonging to interlamellar space
  4. Multi-threshold binarization - masks are generated at several threshold values, allowing the expert to select the one that best captures lamellar boundaries
  5. Report generation - visualizations and PDF for comparison
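The smart-contrast step (step 2) can be sketched in NumPy. The normalization below, (pixel - local mean) / (local std + eps), is an assumption for illustration; the actual preprocessing in `src/binarizers/` may compute local contrast differently:

```python
import numpy as np

def box_stats(img: np.ndarray, size: int):
    """Local mean and std over a size x size sliding window (reflect padding)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return windows.mean(axis=(-2, -1)), windows.std(axis=(-2, -1))

def smart_contrast(gray: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Build a 3-channel input from a single-channel grayscale image.

    Channel 0: normalized grayscale; channel 1: 3x3 local contrast;
    channel 2: inverted 7x7 local contrast. Illustrative sketch only.
    """
    g = gray.astype(np.float32) / 255.0
    mean3, std3 = box_stats(g, 3)
    mean7, std7 = box_stats(g, 7)
    c1 = (g - mean3) / (std3 + eps)
    c2 = -((g - mean7) / (std7 + eps))  # inverted local contrast
    return np.stack([g, c1, c2], axis=0)  # shape (3, H, W)
```

The small window emphasizes fine lamellar edges, while the larger, inverted window responds to broader intensity plateaus, giving the UNet complementary views of the same image.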

Pre-trained Model

  • Architecture: UNet (31M parameters, no BatchNorm)
  • Training data: Synthetic layered images from SynthiLayer
  • Specialization: Detects horizontal lamellar boundaries; suppresses vertical features
  • Input: 3 channels (smart contrast layers, not RGB)
  • Output: 2 classes (interlamellar space / other)
  • File: pretrained_models/unet_220805.pth (~118 MB)
  • Download: python download_model.py or from GitHub Releases

Project Structure

├── src/                    # Source code
│   ├── binarizer_cli.py    # CLI entry point
│   ├── binarizer_pipeline.py
│   ├── binarizers/         # Model, preprocessing, inference
│   ├── entities/           # Configuration dataclass
│   ├── reports/            # Visualization and PDF generation
│   └── utils/              # Logging setup
├── configs/                # YAML configuration files
├── assessment/             # Synthetic benchmarking framework
├── tests/                  # 144 pytest tests
├── download_model.py       # Model download script
├── pretrained_models/      # Model storage (gitignored)
└── data/                   # Input images

Running Tests

pip install -r requirements-dev.txt
python -m pytest tests/ -v

Citation

If you use this tool in your research, please cite:

@article{marnautov2024orientational,
  title={Orientational Ordering of Graphene Oxide Membranes by a Spin Probe Technique and SEM Image Analysis},
  author={Marnautov, Nikolai A. and Matveev, Mikhail V. and Gulin, Alexander A. and K{\'a}lai, Tam{\'a}s and Bogn{\'a}r, Bal{\'a}zs and Rebrikova, Anastasiya T. and Chumakova, Natalia A.},
  journal={The Journal of Physical Chemistry C},
  volume={128},
  number={6},
  pages={2543--2550},
  year={2024},
  publisher={American Chemical Society},
  doi={10.1021/acs.jpcc.3c07127}
}