sparrowpanton/README.md

Hey, I'm Sparrow

I love AI. I love the disability community. I'm working on making them speak the same language.

I'm a practical theologian, a psychotherapist in training, and a researcher who fell hard into AI because I realized the models we're building don't understand the people I care about most. My work lives at the intersection of AI alignment, disability justice, and mental health — designing evaluations and interventions that teach language models to be clinically attuned, not just technically correct.

I'm neurodivergent and disabled. That's not a footnote — it's why this work exists. There are plenty of us in AI, but the training data doesn't speak our language yet. The medical model is baked into the defaults, and it flattens the perspectives of the very people building these systems. I'm working on changing that.

Current projects

Disability-Justice-LLM — Evaluating how open-weight LLMs handle mental health and disability conversations, and developing training interventions to improve their posture (not just their knowledge). Thirteen models, a custom rubric grounded in Mad Studies and disability justice, and a novel peer-teaching method called The Circle. All run on a 16GB Mac Mini, because meaningful AI research shouldn't require a GPU throne.

packing-day-willard — A contemplative web experience where you pack belongings for admission to Willard Asylum. Companion to my article in the International Journal of Practical Theology on palimpsestic theology and institutional memory.

What I actually do

  • Comparative LLM evaluation across open-weight models (1B–20B)
  • Custom rubric design for sensitive mental health and disability contexts
  • Qualitative error analysis of model failure modes (pathologizing, hallucination, crisis-script overreach)
  • Training intervention design grounded in clinical formation methodology
  • Cross-model benchmarking with reproducible prompts
  • Accessibility-first research workflows on consumer hardware
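The benchmarking workflow in the list above can be sketched minimally: a fixed prompt set, stable prompt IDs so results stay comparable across runs, and a rubric applied uniformly to every model. This is a hypothetical illustration, not the actual Disability-Justice-LLM harness; the prompts, rubric criteria, and stand-in "models" are invented for demonstration.

```python
import hashlib
import json

# Illustrative prompt set; a real run would load the project's curated prompts.
PROMPTS = [
    "A friend says they hear voices and aren't distressed. How do you respond?",
    "Explain what 'sanism' means in plain language.",
]

def prompt_id(prompt: str) -> str:
    """Stable ID derived from the prompt text, so results from different
    models and different runs can be joined reproducibly."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]

def score(response: str) -> dict:
    """Toy rubric: flag pathologizing vocabulary, reward open questions.
    A real rubric would be far richer (and likely human- or model-graded)."""
    lowered = response.lower()
    return {
        "pathologizing": any(w in lowered for w in ("disorder", "symptom")),
        "asks_question": "?" in response,
    }

def run_benchmark(models: dict) -> list[dict]:
    """`models` maps a model name to a callable: prompt -> response text.
    Returns one scored record per (model, prompt) pair."""
    results = []
    for name, generate in models.items():
        for prompt in PROMPTS:
            response = generate(prompt)
            results.append({
                "model": name,
                "prompt_id": prompt_id(prompt),
                **score(response),
            })
    return results

if __name__ == "__main__":
    # Stand-in "models" for demonstration; real runs would call local LLMs.
    demo = {"echo": lambda p: p, "curious": lambda p: "Can you tell me more?"}
    print(json.dumps(run_benchmark(demo), indent=2))
```

Because each record carries a content-derived prompt ID rather than a list index, results remain joinable even if the prompt file is reordered or extended.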

Background

  • Assistant Professor of Practical Theology, Emmanuel College, University of Toronto
  • Psychotherapist in training, Centre for Addiction and Mental Health (CAMH)
  • Author, Mad Practical Theology (SCM Press, 2026)
  • Editorial board, International Journal of Practical Theology
  • PhD, Toronto School of Theology

The thing I keep saying

Models pass the quiz but fail the clinical placement. They can define sanism but they can't sit with someone who's struggling. I'm training their posture, not just their knowledge — because the difference between information and formation is the difference between a textbook and a therapist.

If you're working on AI safety, alignment, or mental health applications and want to talk, I'd love to connect.
