294 changes: 92 additions & 202 deletions README.md
@@ -1,202 +1,92 @@
# Fiber Photometry Processing GUI

<img width="500" height="500" alt="pyBer_logo_big" src="https://github.com/user-attachments/assets/e5acb000-17cd-451d-9f49-4218b41519aa" />


A desktop GUI for visualizing photometry recordings, cleaning artifacts, filtering/resampling, baseline estimation, motion-correction, and exporting processed traces for downstream analysis.

This project is designed for efficient exploratory QC (preview in the GUI) while keeping processing logic deterministic and scriptable (core functions live in `analysis_core.py`).

---

## Key Features

### Data IO
- Supports raw data in `.doric`, `.h5`, or `.csv` format.
- Multi-channel support (e.g., analog and DIO channels).
- Optional alignment of analog traces to the DigitalIO timebase when a DIO is selected.

### Artifact Handling
- Artifact detection on the raw 465 nm signal using derivative thresholding (`dx`) and MAD:
  - **Global MAD (dx)**: one threshold for the full trace
  - **Adaptive MAD (windowed)**: windowed thresholds for nonstationary noise
- Optional **padding** around detected artifacts to remove spillover.
- Manual artifact masking by user-defined time regions.
- Masked samples are replaced via **linear interpolation** to preserve time alignment.
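As a minimal sketch of the global MAD(`dx`) variant (the function names, `mad_k=8.0`, and the padding width are illustrative, not the GUI defaults):

```python
import numpy as np

def detect_artifacts_global_mad(sig, mad_k=8.0, pad=2):
    """Flag samples whose derivative deviates by more than mad_k * MAD(dx)."""
    dx = np.diff(sig, prepend=sig[0])
    med = np.median(dx)
    mad = np.median(np.abs(dx - med))
    hits = np.abs(dx - med) > mad_k * mad
    # pad the mask around each hit to catch spillover
    mask = hits.copy()
    for shift in range(1, pad + 1):
        mask |= np.roll(hits, shift) | np.roll(hits, -shift)
    return mask

def interpolate_masked(sig, mask):
    """Replace masked samples by linear interpolation (preserves alignment)."""
    out = sig.astype(float).copy()
    idx = np.arange(sig.size)
    out[mask] = np.interp(idx[mask], idx[~mask], sig[~mask])
    return out
```

The adaptive variant applies the same rule per window so the threshold can follow nonstationary noise.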

### Signal Conditioning
- Low-pass filtering.
- Decimation/resampling.

### Baseline Estimation
Baseline is computed after filtering and resampling using **pybaselines**:
- `asls`, `arpls`, `airpls`
- tunable parameters (lambda, diff order, iterations, tolerance)
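The actual baselines come from pybaselines; the following numpy-only sketch of the AsLS objective (parameter values are illustrative) shows what lambda and the asymmetry parameter control:

```python
import numpy as np

def asls_baseline(y, lam=1e4, p=0.01, n_iter=10):
    """Minimal asymmetric least squares (AsLS) baseline, dense solve.

    lam : smoothness penalty on the 2nd difference of the baseline
    p   : asymmetry (points above the baseline get weight p, below 1 - p)
    """
    n = y.size
    D = np.diff(np.eye(n), 2, axis=0)   # 2nd-difference operator, (n-2, n)
    P = lam * D.T @ D
    w = np.ones(n)
    for _ in range(n_iter):
        z = np.linalg.solve(np.diag(w) + P, w * y)
        w = np.where(y > z, p, 1.0 - p)  # downweight points above the baseline
    return z
```

Larger lambda gives a stiffer baseline; smaller `p` makes it hug the lower envelope, which is why transients are left on top of it rather than absorbed into it.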

### Output Modes (7)
The GUI exposes seven explicit output definitions:

1. **dFF (non motion corrected)**
`dFF = (signal_filtered - signal_baseline) / signal_baseline`

2. **zscore (non motion corrected)**
`zscore(dFF_nonMC)`

3. **dFF (motion corrected via subtraction)**
`dFF_mc = dFF_signal - dFF_ref`
where each dFF uses its own baseline.

4. **zscore (motion corrected via subtraction)**
`zscore(dFF_signal - dFF_ref)`

5. **zscore (subtractions)**
`zscore(dFF_signal) - zscore(dFF_ref)`

6. **dFF (motion corrected with fitted ref)**
Fit the isosbestic/reference channel to the signal:
`fitted_ref = a * ref_filtered + b`
then compute:
`dFF = (signal_filtered - fitted_ref) / fitted_ref`

7. **zscore (motion corrected with fitted ref)**
`zscore( (signal_filtered - fitted_ref) / fitted_ref )`
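The seven modes reduce to a few small formulas; a sketch of modes 1, 3, and 6 (helper names are hypothetical, inputs synthetic):

```python
import numpy as np

def dff(filtered, baseline):
    """Mode 1: dFF = (signal_filtered - signal_baseline) / signal_baseline."""
    return (filtered - baseline) / baseline

def zscore(x):
    return (x - x.mean()) / x.std()

def dff_mc_subtraction(sig_f, sig_b, ref_f, ref_b):
    """Mode 3: subtraction-based motion correction; each channel uses its own baseline."""
    return dff(sig_f, sig_b) - dff(ref_f, ref_b)

def dff_mc_fitted_ref(sig_f, ref_f):
    """Mode 6: fit the reference to the signal, then use the fitted ref as baseline."""
    a, b = np.polyfit(ref_f, sig_f, 1)   # OLS: fitted_ref = a * ref + b
    fitted_ref = a * ref_f + b
    return (sig_f - fitted_ref) / fitted_ref
```

The z-scored modes (2, 4, 5, 7) just wrap these results in `zscore(...)` in the combinations listed above.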

### Reference Fitting Methods (for “fitted ref” modes)
- **OLS (recommended)**: fast and stable
- **Lasso**: sparse regression (requires `scikit-learn`)
- **RLM (HuberT)**: robust linear model via IRLS + Huber weighting (no extra dependency)
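A dependency-free sketch of the IRLS + Huber weighting idea (not the app's exact implementation; `t=1.345` is the conventional Huber tuning constant):

```python
import numpy as np

def huber_irls_fit(ref, sig, t=1.345, n_iter=50):
    """Robust linear fit sig ~ a*ref + b via IRLS with Huber weights."""
    X = np.column_stack([ref, np.ones_like(ref)])
    a, b = np.linalg.lstsq(X, sig, rcond=None)[0]      # OLS starting point
    for _ in range(n_iter):
        r = sig - (a * ref + b)
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust sigma via MAD
        u = np.abs(r) / scale
        w = np.where(u <= t, 1.0, t / u)               # Huber weights
        sw = np.sqrt(w)
        a, b = np.linalg.lstsq(sw[:, None] * X, sw * sig, rcond=None)[0]
    return a, b
```

Outliers get weight `t/u` instead of 1, so occasional artifacts in the reference pull the fit far less than they would under plain OLS.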

### Export
- Export processed output to:
  - CSV with configurable fields (`time` always included; raw/isosbestic/output/DIO selectable)
  - HDF5 with configurable raw/output/DIO/baseline datasets plus metadata
- Export field selection is saved and restored through the preprocessing configuration file.
- Drag-and-drop support for preprocessing and post-processing files.

---

## Repository Structure (typical)

- `analysis_core.py`
Processing pipeline (loading, filtering, baselines, outputs, export helpers)
- `main.py` (or similar)
PySide6 GUI entry point and UI wiring
- `environment.yml`
  Conda environment definition

---

## Installation

1. Create the environment:

   ```bash
   conda env create -f environment.yml
   conda activate pyBer
   ```

2. Run the app:

   ```bash
   cd pyBer
   python main.py
   ```

## Usage workflow

### Load data

- Open a recording (Doric `.h5`, `.doric`, or `.csv`).
- Choose a channel (e.g., AIN01).
- Optionally select a DigitalIO line to overlay events.

### QC & artifact removal

- Choose Global MAD (dx) or Adaptive MAD (windowed).
- Tune `mad_k`, window size, and padding.
- Add manual mask regions if needed.

### Filtering & resampling

- Set the low-pass cutoff (Hz) and filter order.
- Set a target sampling rate (Hz) for consistent downstream analysis.
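With SciPy, this step might look like the following (cutoff, order, and rates are illustrative, not the GUI defaults):

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

fs = 1000.0                      # original sampling rate (Hz), illustrative
t = np.arange(2000) / fs         # 2 s of data at 1 kHz
sig = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 120 * t)

# zero-phase Butterworth low-pass (avoids phase distortion of transients)
b, a = butter(N=4, Wn=10.0, btype="low", fs=fs)
filtered = filtfilt(b, a, sig)

# decimate to a 100 Hz target rate (factor 10, includes anti-alias filtering)
resampled = decimate(filtered, 10)
```

Decimating after an explicit low-pass keeps the trace free of aliasing at the new rate.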

### Baseline estimation

- Choose `asls`, `arpls`, or `airpls`.
- Tune lambda and the other parameters to avoid baseline leakage into fast transients.

### Select output

- Pick one of the seven output modes.
- For "fitted ref" modes, choose the fit method (OLS/Lasso/RLM-HuberT).


### Export

- Export CSV/H5 for analysis in Python/MATLAB/R.

---

## Preprocessing: Advanced Options

The preprocessing panel includes an **Advanced options** button with two features:

1. **Cut out regions (NaN)**
   Define start/end ranges to exclude parts of the trace from downstream analysis. Cutout regions are filled with NaN in the output and can be exported as-is.

2. **Sections (per-section processing)**
   Define multiple start/end sections and **assign per-section processing parameters**. Each section can be exported independently (one CSV/H5 per section).

### Time window
You can optionally set a start time, end time, or both:
- Start only: process from start → end of recording
- End only: process from 0 → end
- Start + end: process that window only
- Empty: process full trace
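The four cases collapse to one slice helper (a sketch; `None` means the bound is unset):

```python
import numpy as np

def apply_time_window(time, sig, start=None, end=None):
    """Keep samples with start <= t <= end; None leaves that side open."""
    lo = -np.inf if start is None else start
    hi = np.inf if end is None else end
    keep = (time >= lo) & (time <= hi)
    return time[keep], sig[keep]
```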

---

## Post-Processing

### Align sources
- **DIO**: choose DIO channel, polarity (0→1 or 1→0), and align to onset/offset
- **Behavior (CSV/XLSX)**: load a behavior file, select a binary (0/1) column, and align to onset or offset
- **Transitions**: align to A→B transitions with a max gap threshold
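Onset/offset extraction from a binary trace reduces to edge detection (a sketch; polarity 0→1 is treated as onset):

```python
import numpy as np

def dio_edges(dio):
    """Return onset (0→1) and offset (1→0) sample indices of a binary trace."""
    d = np.diff(dio.astype(int))
    onsets = np.flatnonzero(d == 1) + 1    # first sample of each high period
    offsets = np.flatnonzero(d == -1) + 1  # first sample after each high period
    return onsets, offsets
```

For the opposite polarity, swap the roles of the two returned arrays.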

### Behavior file formats
- **CSV**: must include a time column and one or more binary behavior columns
- **Ethovision XLSX**: the loader preprocesses and cleans the file; the user selects the sheet when loading

### PSTH/Heatmap
- Heatmap and PSTH refresh automatically when alignment settings change
- Event lines are overlaid on the trace preview
- Event duration histogram appears to the right of the heatmap
- Metrics bar plot (pre vs post) appears to the right of the PSTH
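A minimal sketch of how the heatmap matrix and the average PSTH relate (window sizes in samples; names are hypothetical):

```python
import numpy as np

def build_psth(sig, event_idx, pre, post):
    """Stack per-event windows into an (n_events, pre+post) matrix,
    then average across events; SEM = std / sqrt(n_events)."""
    rows = [sig[i - pre:i + post] for i in event_idx
            if i - pre >= 0 and i + post <= sig.size]
    heat = np.vstack(rows)                 # heatmap: one row per event
    psth = heat.mean(axis=0)               # average PSTH
    sem = heat.std(axis=0, ddof=1) / np.sqrt(heat.shape[0])
    return heat, psth, sem
```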

### Metrics
Choose AUC or mean z-score and define pre/post windows in seconds.
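Both metrics can be sketched as window reductions around each event (the helper name and window bounds are illustrative; AUC uses trapezoidal integration):

```python
import numpy as np

def prepost_metric(time, z, event_t, pre=(-2.0, 0.0), post=(0.0, 2.0),
                   metric="auc"):
    """Compare a window before the event with a window after it."""
    def window(lo, hi):
        m = (time >= event_t + lo) & (time < event_t + hi)
        tt, zz = time[m], z[m]
        if metric == "auc":
            # trapezoidal area under the curve within the window
            return np.sum(0.5 * (zz[1:] + zz[:-1]) * np.diff(tt))
        return zz.mean()                   # mean z-score
    return window(*pre), window(*post)
```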

### Export results
Export any combination of:
- Heatmap matrix
- Average PSTH + SEM
- Event times
- Event durations
- Metrics table

---

## Group Mode (Multiple Animals)

Use the **Group** tab in Post-Processing to load multiple processed files (CSV/H5). Each file should represent a single animal, and matching behavior files should share the same base name. In group mode:
- Heatmap rows represent animals (not trials)
- PSTH is averaged across animals

---


# pyBer

pyBer is a desktop application for fiber photometry analysis. It helps you load
Doric, HDF5, or CSV recordings, clean artifacts, preprocess traces, align signals
to behavior or DIO events, inspect PSTHs and heatmaps, detect transients, and
export results for Python, MATLAB, R, or Prism.

The app is built for users who want an interactive workflow first, with
deterministic processing code underneath.

![pyBer logo](https://github.com/user-attachments/assets/e5acb000-17cd-451d-9f49-4218b41519aa)

## Quick Install

Install Miniforge or Anaconda first, then run:

```powershell
cd C:\Analysis\app_project\pyBer
conda env create -f environment.yml
conda activate pyBer
Rscript -e "install.packages('fastFMM', repos='https://cloud.r-project.org')"
python .\pyBer\main.py
```

The `fastFMM` step is only needed for the FLMM temporal modeling panel. The rest
of pyBer works without it.

## Launch From VS Code

1. Open the repository folder in VS Code.
2. Select the interpreter from the `pyBer` conda environment.
3. Open `pyBer/main.py`.
4. Press Run, or use:

```powershell
conda activate pyBer
python .\pyBer\main.py
```

If VS Code launches the wrong Python, run `Python: Select Interpreter` and choose
the environment created from `environment.yml`.

## What You Can Do

- Preprocess raw photometry traces with filtering, resampling, baseline
correction, motion correction, and artifact handling.
- Detect and inspect artifacts with interpolation, cutout, local low-pass
filtering, or no-op handling.
- Export processed CSV or HDF5 files with selectable fields and metadata.
- Align processed signals to DIO, behavior states, behavior onsets, or behavior
transitions.
- Detect signal events and compare transient amplitude with baseline-prominence
normalized metrics.
- Build individual or group PSTHs, heatmaps, event duration plots, and metrics.
- Fit temporal models with continuous GLM or trial-level FLMM.
- Rank GLM/FLMM feature contribution with leave-one-feature-out summaries.
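The leave-one-feature-out idea can be sketched with a plain least-squares stand-in for the GLM/FLMM fits (feature names here are made up): rank features by how much the fit quality drops when each one is removed.

```python
import numpy as np

def lofo_ranking(X, y, names):
    """Rank features by the R-squared drop when each one is left out."""
    def r2(X_):
        Xd = np.column_stack([X_, np.ones(len(y))])   # add an intercept
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        resid = y - Xd @ beta
        tot = y - y.mean()
        return 1.0 - (resid @ resid) / (tot @ tot)
    full = r2(X)
    drops = {names[j]: full - r2(np.delete(X, j, axis=1))
             for j in range(X.shape[1])}
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)
```

A large drop means the model genuinely relied on that feature; irrelevant features produce near-zero drops.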

## Documentation

The full user guide is here:

- [pyBer Documentation](docs/index.md)

It includes installation, first launch, preprocessing, postprocessing, transient
detection, temporal modeling, group workflows, export, and troubleshooting.

## Repository Layout

- `pyBer/main.py`: application entry point.
- `pyBer/analysis_core.py`: preprocessing and signal processing backend.
- `pyBer/gui_preprocessing.py`: preprocessing panels.
- `pyBer/gui_postprocessing.py`: postprocessing, PSTH, metrics, and export panels.
- `pyBer/temporal_modeling.py`: GLM and FLMM modeling panel.
- `environment.yml`: conda environment for development and user installs.
- `pyBer.spec`: PyInstaller build configuration.

## Build The Executable

From an activated environment:

```powershell
conda activate pyBer
python -m PyInstaller --noconfirm --clean pyBer.spec
```

The executable is written to `dist/pyBer.exe`.

## Notes

pyBer sets `PYTHONNOUSERSITE=1` in the environment so old packages from the user
Python folder do not interfere with the conda environment. This is important for
Qt, pyqtgraph, numpy, and rpy2 stability on Windows.