
  ___                    _   _  ___  _____
 / _ \ _ __  ___  _ ___ | | | |/ __\/ ___/
| | | | '_ \/ _ \| '_  \| |_| | |   \___ \
| |_| | |_||| __/| | | ||  _  | |__  __/ |
 \___/| .__/\___||_| |_||_| |_|\___/\____/
      |_|           High-Content Screening

Bioimage analysis platform for high-content screening
Compile-time validation · Bidirectional GUI ↔ Code · Multi-GPU · LLM pipeline generation · 574+ functions



OpenHCS processes large microscopy datasets (100GB+) with a compile-then-execute architecture. Pipelines are validated across all wells before any processing starts, preventing failures after hours of computation. Design pipelines in the GUI, export to Python, edit as code, and re-import — switching seamlessly between visual and programmatic workflows.

graph LR
    subgraph Microscopes
        IX[ImageXpress]
        OP[Opera Phenix]
        OM[OMERO]
    end

    subgraph OpenHCS Platform
        PD["Pipeline Designer<br/>(GUI ⇄ Code ⇄ LLM)"]
        CO["5-Phase Compiler<br/>(validate)"]
        EX["Multi-Process Executor<br/>(1 process/well Β· multi-GPU)"]
        FN["574+ Unified Functions<br/>scikit-image Β· CuPy Β· pyclesperanto<br/>PyTorch Β· JAX Β· TF Β· CuCIM Β· custom"]
        PS["PolyStore<br/>(Memory ↔ Disk ↔ ZARR ↔ Stream)"]
    end

    subgraph Viewers
        NA[Napari]
        FJ[Fiji/ImageJ]
    end

    IX --> PD
    OP --> PD
    OM --> PD
    PD --> CO --> EX
    EX --> FN --> PS
    PS --> NA
    PS --> FJ

⚡ Key Capabilities

🛡️ Compile-Time Validation

Pipelines are compiled through 5 phases before execution — path planning, store declaration, materialization flags, memory contract validation, GPU assignment — then frozen into immutable contexts. Errors surface immediately, not after hours of processing.

🔄 Bidirectional GUI ↔ Code

Design pipelines visually, export as executable Python, edit in your IDE, re-import to the GUI. Code generation works at any scope level — function patterns, individual steps, pipeline configs, full orchestrator scripts. Any window holding objects can generate and re-import code.

🧠 LLM Pipeline Generation

Describe a pipeline in natural language and get executable code. Built-in chat panel with local Ollama or remote LLM endpoints. Dynamic system prompts built from the actual function registry — the LLM knows every available function and its signature.

⚡ Full Multiprocessing & Multi-GPU

One process per well via ProcessPoolExecutor, with a GPU scheduler assigning devices to workers. CUDA spawn-safe. Scales from laptops to multi-GPU workstations — process 100GB+ datasets with OME-ZARR compression (LZ4, ZSTD, Blosc).

🔌 Any Python Function

Register any Python function by decorating it with @numpy, @cupy, @pyclesperanto, @torch, or another memory-type decorator. Custom functions get automatic contract validation and UI integration, and appear alongside built-in functions. Persisted to ~/.openhcs/custom_functions/.
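
A minimal sketch of what registration can look like is below. The @numpy decorator name comes from the list above; the import path (and the alias used to avoid shadowing the numpy package) is an assumption, so check the function-registry docs for the real location.

import numpy as np
# Assumed import path; the decorator itself is called `numpy`, aliased here so it
# does not shadow the numpy package.
from openhcs.core.memory.decorators import numpy as as_numpy

@as_numpy
def subtract_background(image: np.ndarray, offset: int = 100) -> np.ndarray:
    """Remove a constant background level, clipping at zero."""
    shifted = image.astype(np.int32) - offset
    return np.clip(shifted, 0, None).astype(image.dtype)

# Once saved under ~/.openhcs/custom_functions/, the function appears in the
# GUI function picker alongside the built-ins.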

📊 Results Materialization

@special_outputs declaratively routes analysis results to CSV, JSON, ROI ZIP (ImageJ-compatible), or TIFF stacks via pluggable format writers. ROIs stream to Fiji/ImageJ. Results appear alongside processed images with no manual I/O code.
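
As a hedged sketch of this declarative pattern: the @special_outputs name comes from the paragraph above, while the import path, argument form, and return convention below are assumptions for illustration only.

# Assumed import path and calling convention; see the materialization docs for the real API.
from openhcs.core.pipeline.function_contracts import special_outputs

@special_outputs("cell_counts")              # assumed: names the extra result to materialize
def count_cells(image_stack):
    counts = [int(plane.mean()) for plane in image_stack]   # placeholder analysis
    return image_stack, counts               # assumed: image plus the named result, routed to CSV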

🔬 Process-Isolated Napari & Fiji

Stream images to Napari and Fiji/ImageJ in real time during pipeline execution. Viewers run in separate processes via ZeroMQ — no Qt threading conflicts. Persistent viewers survive pipeline completion. PolyStore treats viewers as streaming backends.

🪟 Live Cross-Window Updates

Edit a value in GlobalPipelineConfig — watch it propagate in real time to PipelineConfig and StepConfig windows. Dual-axis resolution (context hierarchy × class MRO) with scope isolation per orchestrator.


🧩 The OpenHCS Ecosystem

OpenHCS is built on 8 purpose-extracted libraries — each solving a general problem, each independently publishable, all woven into a cohesive platform:

graph TD
    OH["OpenHCS Platform<br/>(domain wiring + pipelines)"]

    OH --> OS["ObjectState<br/>(config)"]
    OH --> AB["ArrayBridge<br/>(arrays)"]
    OH --> PS["PolyStore<br/>(I/O + streaming)"]
    OH --> ZR["ZMQRuntime<br/>(exec)"]
    OH --> QR["PyQT-reactive<br/>(forms)"]

    OS --> PI["python-introspect<br/>(signatures)"]
    OH --> MR["metaclass-registry<br/>(plugins)"]
    OH --> PC["pycodify<br/>(serialization)"]
| Library | Role in OpenHCS | What It Does |
| --- | --- | --- |
| ObjectState | Configuration framework | Lazy dataclasses with dual-axis inheritance (context hierarchy × class MRO) and contextvars-based resolution |
| ArrayBridge | Memory type conversion | Unified API across NumPy, CuPy, PyTorch, JAX, TensorFlow, pyclesperanto with DLPack zero-copy transfers |
| PolyStore | Unified I/O & streaming | Pluggable backends for storage (disk, memory, ZARR) and streaming (Napari, Fiji) — viewers are just backends. Virtual workspace, atomic writes, format detection, ROI extraction |
| ZMQRuntime | Distributed execution | ZMQ client-server for remote pipeline execution, progress streaming, and OMERO server-side processing |
| PyQT-reactive | UI form generation | React-style reactive forms from dataclasses with cross-window sync and flash animations |
| pycodify | Code ↔ object conversion | Python source as serialization format — type-preserving, diffable, editable, with collision handling |
| python-introspect | Signature analysis | Pure-Python function/dataclass introspection for automatic UI generation and contract analysis |
| metaclass-registry | Plugin discovery | Zero-boilerplate registry system powering microscope handler and storage backend auto-discovery |

🔬 Microscope & Function Support

Microscope Systems

| System | Vendor |
| --- | --- |
| ImageXpress | Molecular Devices |
| Opera Phenix | PerkinElmer |
| OMERO | Open Microscopy |
| OpenHCS Format | Native |

Auto-detected. Extensible via metaclass-registry.
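
The discovery mechanism is ordinary Python metaclass registration. The sketch below is a generic illustration of that pattern (hypothetical class names), not the metaclass-registry API itself.

class HandlerMeta(type):
    """Collects every concrete subclass into a registry at class-definition time."""
    registry: dict = {}

    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        if bases:  # skip the abstract base class itself
            HandlerMeta.registry[name] = cls

class MicroscopeHandler(metaclass=HandlerMeta):
    """Hypothetical base class; subclasses register themselves just by being defined."""

class ImageXpressHandler(MicroscopeHandler):
    pass

print(HandlerMeta.registry)  # {'ImageXpressHandler': <class '__main__.ImageXpressHandler'>}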

574+ Functions — Automatic Discovery

| Library | Functions | Acceleration |
| --- | --- | --- |
| pyclesperanto | 230+ | OpenCL GPU |
| CuCIM/CuPy | 124+ | CUDA GPU |
| scikit-image | 110+ | CPU |
| PyTorch / JAX / TF | ✓ | CUDA GPU |
| OpenHCS native | ✓ | Mixed |

Unified contracts, automatic memory conversion via ArrayBridge.

Processing domains: image preprocessing · segmentation · cell counting · stitching (MIST + Ashlar GPU) · neurite tracing · morphology · measurements


🚀 Quick Start

# Basic installation with GUI
pip install openhcs[gui]

# Add Napari viewer
pip install openhcs[gui,napari]

# Add Fiji/ImageJ viewer
pip install openhcs[gui,fiji]

# Add both viewers
pip install openhcs[gui,viz]

# Add GPU acceleration (CUDA 12.x required)
pip install openhcs[gui,gpu]

# Full installation (GUI + viewers + GPU)
pip install openhcs[gui,viz,gpu]

# Launch the application
openhcs
# Or use programmatically — real pipeline from a neuroscience experiment
from openhcs.core.orchestrator.orchestrator import PipelineOrchestrator
from openhcs.core.steps.function_step import FunctionStep
from openhcs.constants.constants import VariableComponents
from openhcs.processing.backends.processors.cupy_processor import (
    stack_percentile_normalize, tophat, create_composite
)
from openhcs.processing.backends.pos_gen.ashlar_main_gpu import ashlar_compute_tile_positions_gpu
from openhcs.processing.backends.assemblers.assemble_stack_cupy import assemble_stack_cupy
from openhcs.processing.backends.analysis.cell_counting_cpu import count_cells_single_channel

steps = [
    FunctionStep(func=[
        (stack_percentile_normalize, {'low_percentile': 1.0, 'high_percentile': 99.0}),
        (tophat, {'selem_radius': 50, 'downsample_factor': 4})
    ], name="preprocess", variable_components=[VariableComponents.SITE]),

    FunctionStep(func=[create_composite],
                 name="composite", variable_components=[VariableComponents.CHANNEL]),

    FunctionStep(func=[ashlar_compute_tile_positions_gpu],
                 name="find_positions", variable_components=[VariableComponents.SITE]),

    FunctionStep(func=[(assemble_stack_cupy, {'blend_method': 'fixed'})],
                 name="assemble", variable_components=[VariableComponents.SITE],
                 force_disk_output=True),

    FunctionStep(func=[count_cells_single_channel],
                 name="count_cells", variable_components=[VariableComponents.SITE]),
]

orchestrator = PipelineOrchestrator("path/to/plate")
orchestrator.initialize()
compiled = orchestrator.compile_pipelines(steps)  # Validates everything first
orchestrator.execute_compiled_plate(steps, compiled, max_workers=5)
📦 All installation options
pip install openhcs              # Headless (servers, CI)
pip install openhcs[gui]         # Desktop GUI
pip install openhcs[gui,napari]  # GUI + Napari viewer
pip install openhcs[gui,viz]     # GUI + Napari + Fiji
pip install openhcs[gui,viz,gpu] # Full installation
pip install openhcs[gpu]         # Headless + GPU
pip install openhcs[omero]       # OMERO integration
pip install -e ".[all,dev]"      # Development (all features)

GPU requires CUDA 12.x. For CPU-only: OPENHCS_CPU_ONLY=true pip install openhcs[gui]

πŸ—„οΈ OMERO integration

OMERO requires zeroc-ice (not on PyPI). The custom setup.py installs it automatically:

pip install 'openhcs[omero]'     # Auto-installs zeroc-ice

If that fails, alternatives:

python scripts/install_omero_deps.py   # Standalone script
pip install -r requirements-omero.txt  # Requirements file

Supported on Python 3.11 and 3.12. See Glencoe Software for manual installation.


📖 Documentation

📘 Read the Docs: Full API docs, tutorials, guides
📊 Coverage Reports: Test coverage analysis
🏗️ Architecture: Pipeline system · GPU management · VFS · Config framework
🎓 Getting Started: Installation · First pipeline

βš™οΈ Architecture Highlights

5-Phase Pipeline Compilation β€” catch errors before execution starts
Define β†’ Compile β†’ Freeze β†’ Execute
         β”œβ”€ 1. Path planning
         β”œβ”€ 2. ZARR store declaration
         β”œβ”€ 3. Materialization flags
         β”œβ”€ 4. Memory contract validation
         └─ 5. GPU assignment
              β†’ context.freeze()  # immutable

Pipelines are compiled for every well before any processing begins. Frozen contexts prevent state mutation during execution. Read more β†’
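
Concretely, with the same orchestrator API as the Quick Start above, a broken pipeline is rejected by compile_pipelines() before any image is read; catching a bare Exception here is a simplification, since the concrete error types are not spelled out in this README.

orchestrator = PipelineOrchestrator("path/to/plate")
orchestrator.initialize()
try:
    compiled = orchestrator.compile_pipelines(steps)       # all wells validated up front
except Exception as exc:                                   # e.g. a memory-contract or path error
    print(f"Rejected at compile time, before any processing: {exc}")
else:
    orchestrator.execute_compiled_plate(steps, compiled, max_workers=5)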

Dual-Axis Configuration — context hierarchy × class MRO

Resolution walks two axes simultaneously: the context stack (Global → Pipeline → Step) and the class MRO (inheritance chain). Built on contextvars for thread-safe, scope-isolated resolution. Preserves None vs concrete value distinction for proper field-level inheritance. Powered by ObjectState. Read more →
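
For illustration of the context-axis rule only (plain dataclasses with hypothetical field names, not the ObjectState API): None means "inherit from the enclosing scope" and a concrete value overrides it.

from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class Scope:
    num_workers: Optional[int] = None
    output_format: Optional[str] = None

def resolve(*scopes: Scope) -> Scope:
    """Walk scopes from most specific to least; keep the first non-None value per field."""
    resolved = Scope()
    for f in fields(Scope):
        for scope in scopes:
            value = getattr(scope, f.name)
            if value is not None:
                setattr(resolved, f.name, value)
                break
    return resolved

step = Scope(output_format="tiff")                       # Step overrides the format only
pipeline = Scope()                                       # Pipeline inherits everything
global_cfg = Scope(num_workers=5, output_format="zarr")  # Global supplies the defaults
print(resolve(step, pipeline, global_cfg))               # Scope(num_workers=5, output_format='tiff')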

Bidirectional GUI ↔ Code — code generation at any scope level

Any window holding ObjectState objects can generate and re-import executable Python:

Function patterns · Individual steps · Pipeline configs · Full orchestrator scripts
              ↕  generate / AST-parse back  ↕

Each scope encapsulates all lower-scope imports. Generated code is fully executable without additional setup. Edit in your IDE or external editor, save, and the GUI re-imports via AST parsing. Powered by pycodify + python-introspect. Read more →
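
Because generated code is ordinary Python source, the parse-back direction needs nothing beyond the standard library. The snippet below is a miniature illustration of that round trip, not the pycodify API; the string is shaped like generated pipeline code and is only parsed, never executed.

import ast

generated = "steps = [FunctionStep(func=[create_composite], name='composite')]"

tree = ast.parse(generated)
assign = tree.body[0]                     # the `steps = [...]` assignment
call = assign.value.elts[0]               # the FunctionStep(...) call node
print(assign.targets[0].id)               # 'steps'
print(call.func.id)                       # 'FunctionStep'
print([kw.arg for kw in call.keywords])   # ['func', 'name']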

Cross-Window Live Updates — class-level registry + Qt signals

A class-level registry tracks all active form managers. When a value changes in any config window, Qt signals propagate the change to every affected window with debounced, scope-isolated refreshes. Global → Pipeline → Step cascading with per-orchestrator isolation. Powered by PyQT-reactive. Read more →

More patterns — PolyStore streaming, function discovery, memory types
  • PolyStore Unified I/O: Storage backends (disk, memory, ZARR) and streaming backends (Napari, Fiji) behind one API — viewers are just backends. Virtual workspace path translation, atomic writes, ROI extraction.
  • Automatic Function Discovery: 574+ functions with contract analysis and type-safe integration via python-introspect + metaclass-registry
  • Memory Type Management: Compile-time validation of array type compatibility with zero-copy conversion via ArrayBridge
  • Custom Function Registration: Any Python function decorated with @numpy, @cupy, @pyclesperanto, etc. is auto-integrated with contracts, UI forms, and the function registry
  • Evolution-Proof UI: Type-based form generation from Python annotations — adapts automatically when signatures change

Full architecture docs →


🤝 Contributing

git clone https://github.com/OpenHCSDev/OpenHCS.git && cd OpenHCS
pip install -e ".[all,dev]"
pytest tests/

Contribution areas: microscope formats · processing functions · GPU backends · documentation


📄 License

MIT — see LICENSE.

🙏 Acknowledgments

OpenHCS evolved from EZStitcher and builds on Ashlar (stitching), MIST (phase correlation), pyclesperanto (GPU image processing), and scikit-image (image analysis).
