A native Geometric Algebra Transformer architecture based on Conformal Geometric Algebra
Start with the interactive tutorial:

```bash
jupyter notebook quickstart.ipynb
```

This notebook walks you through:
- Importing the Versor architecture
- Creating a simple dataset (learning x²)
- Training the model
- Testing and evaluation
The quickstart provides a minimal working example you can adapt to your own problems!
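The notebook's flow can be sketched in a few lines. Note this is a hedged illustration: it substitutes a plain PyTorch MLP for the Versor model, since the exact `VersorTransformer` constructor arguments live in `Model/model.py` and are not reproduced here; only the toy dataset (learning x²) and the training/evaluation loop are shown.

```python
import torch
from torch import nn

# Toy dataset from the quickstart: learn y = x^2 on [-1, 1]
x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = x ** 2

# Stand-in model: a small MLP. In practice you would import the Versor
# architecture from Model/ instead; this placeholder only shows the loop.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

initial = loss_fn(model(x), y).item()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"loss: {initial:.4f} -> {loss.item():.4f}")
```

Swapping the stand-in for the real model leaves the dataset, loop, and evaluation code unchanged, which is what makes the quickstart easy to adapt.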
```
Versor/
├── quickstart.ipynb       # 👈 START HERE! Interactive tutorial
├── Model/                 # Core Versor architecture
│   ├── __init__.py
│   ├── core.py            # Geometric algebra operations (Cl(4,1))
│   ├── layers.py          # VersorLinear, VersorAttention
│   └── model.py           # VersorTransformer, VersorBlock
├── tasks/                 # Task-specific implementations
│   ├── nlp/               # Natural language processing tasks
│   ├── vision/            # Computer vision tasks
│   ├── nbody/             # N-body physics simulations
│   ├── topology/          # Topological reasoning tasks
│   ├── multimodal/        # Multimodal learning
│   ├── scripts/           # Analysis and benchmarking scripts
│   └── figures/           # Generated plots and visualizations
├── library/               # Utility functions and helpers
├── gatr/                  # GATr baseline implementation
├── data/                  # Datasets
├── results/               # Experimental results
├── requirements.txt       # Python dependencies
└── kernel.py              # Custom CUDA kernels
```
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd Versor
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- (Optional) For CUDA acceleration:

  ```bash
  # Ensure you have CUDA toolkit installed
  # The custom kernels in kernel.py will be compiled automatically
  ```

- VersorTransformer: Full geometric transformer for Cl(4,1)
- VersorBlock: High-performance block with Geometric Product Attention (GPA)
- VersorAttention: Attention mechanism using geometric products
- VersorLinear: Linear layer preserving multivector structure
- RecursiveRotorAccumulator: Rotor-based sequence pooling
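The rotor-pooling idea behind `RecursiveRotorAccumulator` (whose exact API is defined in `Model/layers.py` and not reproduced here) can be illustrated with 2D rotors, i.e. unit complex numbers: a sequence is pooled by composing one rotor per element, and the accumulated state stays on the rotor manifold. In Cl(4,1) rotor composition is non-commutative, making the pooled state order-sensitive; the 2D sketch below only shows the composition and normalization mechanics.

```python
import numpy as np

def rotor_2d(theta: float) -> complex:
    """A 2D rotor is a unit complex number e^{i*theta}."""
    return complex(np.cos(theta), np.sin(theta))

def accumulate(angles) -> complex:
    """Pool a sequence by left-multiplying one rotor per element."""
    state = 1 + 0j  # identity rotor
    for theta in angles:
        state = rotor_2d(theta) * state
    return state

pooled = accumulate([0.1, 0.2, 0.3])
# Composing rotors adds their angles: the result is e^{i*0.6}
assert np.isclose(np.angle(pooled), 0.6)
# The accumulated rotor never leaves the unit circle (a manifold constraint)
assert np.isclose(abs(pooled), 1.0)
```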
- `conformal_lift`: Lift 4D points to Cl(4,1) multivectors
- `gp_cl41`: Geometric product in Cl(4,1)
- `wedge_cl41`: Wedge (exterior) product
- `inner_cl41`: Inner product
- `reverse_cl41`: Clifford conjugation
- `normalize_cl41`: Manifold normalization
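For intuition, a conformal lift follows the standard CGA embedding X = x + ½|x|² e∞ + e₀, which always produces a null vector (X·X = 0). The repo's `conformal_lift` handles 4D points; the self-contained numpy sketch below shows the same construction for a 3D point under the Cl(4,1) metric, purely as an illustration of the formula:

```python
import numpy as np

# Basis order: (e1, e2, e3, e+, e-) with metric signature (+, +, +, +, -)
METRIC = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])

# Null basis vectors of the conformal model
E_INF = np.array([0, 0, 0, 1.0, 1.0])   # e_inf = e- + e+
E_0 = np.array([0, 0, 0, -0.5, 0.5])    # e_0 = (e- - e+) / 2

def inner(a, b):
    """Inner product under the Cl(4,1) vector metric."""
    return a @ METRIC @ b

def conformal_lift_3d(x):
    """Lift a Euclidean 3D point to a null vector: x + 0.5|x|^2 e_inf + e_0."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, [0.0, 0.0]]) + 0.5 * (x @ x) * E_INF + E_0

X = conformal_lift_3d([1.0, 2.0, 3.0])
# The lifted point is null: X . X == 0
assert np.isclose(inner(X, X), 0.0)
# And it is normalized against e_inf: X . e_inf == -1
assert np.isclose(inner(X, E_INF), -1.0)
```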
```bash
cd tasks/nlp
python dyck_rotor.py --depths 20 50 100 --repeats 3
```

```bash
cd tasks/vision
# See task-specific README for details
```

```bash
cd tasks/nbody
# See task-specific README for details
```

Run comprehensive benchmarks:

```bash
# Small-scale benchmark
python tasks/scripts/benchmark_versor_small.py

# Large-scale benchmark
python tasks/scripts/benchmark_versor_large.py

# Compare with GATr baseline
python tasks/scripts/benchmark_gatr.py

# Generate scaling plots
python tasks/scripts/generate_plot.py
```

The quickstart.ipynb provides a template for:
- Regression tasks: Predict continuous values
- Classification tasks: Modify the output layer
- Sequence modeling: Use the rotor accumulator
- Custom data: Adapt the data loading section
Key steps:
- Prepare your data in the appropriate format
- Lift data to Cl(4,1) using `conformal_lift` or a custom embedding
- Configure model hyperparameters (embed_dim, n_heads, n_layers)
- Train with a standard PyTorch training loop
- Evaluate and visualize results
If you use this code in your research, please cite:
```bibtex
@article{Huy:2026wcd,
    author = "Huy, Truong Minh and Hirst, Edward",
    title = "{Versor: A Geometric Sequence Architecture}",
    eprint = "2602.10195",
    archivePrefix = "arXiv",
    primaryClass = "cs.LG",
    month = "2",
    year = "2026"
}
```

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
See LICENSE file for details.
Browse `tasks/` for domain-specific implementations.
- Import errors: Ensure you're in the repository root and have installed all dependencies
- CUDA errors: Check CUDA compatibility with your PyTorch version
- Memory issues: Reduce batch size or model dimensions
- Convergence issues: Try adjusting the learning rate or using gradient clipping
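Gradient clipping, for instance, is one line with PyTorch's built-in utility. This sketch uses a stand-in linear model (not a Versor class) and rescales the global gradient norm before the optimizer step:

```python
import torch
from torch import nn

model = nn.Linear(4, 1)  # stand-in for a Versor model
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y) * 1e6  # exaggerated to force huge gradients

opt.zero_grad()
loss.backward()
# Rescale all gradients together so their global L2 norm is at most 1.0
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()

total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
assert total_norm <= 1.0 + 1e-4
```

Clipping by global norm preserves the gradient's direction while bounding the step size, which is often enough to stabilize an otherwise diverging run.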
For questions or issues, please open a GitHub issue or contact the authors.