
D-ST-Sword / mlx-snn

Spiking Neural Network library built natively on Apple MLX

603 stars
1 fork
0 issues
Python

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing D-ST-Sword/mlx-snn in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/D-ST-Sword/mlx-snn)

Repository Overview (README excerpt)


## mlx-snn

**A general-purpose Spiking Neural Network library built on Apple MLX.**

mlx-snn aims to provide an efficient, research-friendly SNN framework that leverages MLX's unified memory architecture and lazy evaluation. Whether you're exploring neuron dynamics, training classifiers with surrogate gradients, or exchanging models via NIR, mlx-snn offers a clean, Pythonic API that integrates naturally into the MLX ecosystem.

### Why mlx-snn?

- **MLX-native** — All operations are built on MLX, with no PyTorch/CUDA dependency. Runs on Apple Silicon with zero-copy unified memory.
- **Research-friendly** — Explicit state dicts, composable surrogate gradients, and standard patterns make it easy to experiment and extend.
- **Cross-framework** — NIR support lets you import and export models to/from snnTorch, Norse, SpikingJelly, and neuromorphic hardware platforms.
- **Hardware-tested** — Currently validated on Apple M3 Max. Future Apple Silicon releases will be tested and supported as they become available.

### Installation

Requires Python 3.9+ and Apple Silicon (M1/M2/M3/M4).
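The Leaky Integrate-and-Fire dynamics at the heart of the library (leaky membrane decay, threshold crossing, reset) can be sketched in plain NumPy. This is an illustrative sketch of the mechanism only: `lif_step`, its signature, and the state-dict layout are assumptions for exposition, not the mlx-snn API.

```python
import numpy as np

def lif_step(x, state, beta=0.9, threshold=1.0):
    """One LIF timestep: leaky integration, spike, soft reset.

    Hypothetical helper for illustration; not the mlx-snn API.
    """
    mem = beta * state["mem"] + x             # leaky membrane integration
    spk = (mem > threshold).astype(x.dtype)   # fire where threshold is crossed
    mem = mem - spk * threshold               # soft reset by subtraction
    return spk, {"mem": mem}

# Drive a single neuron with a constant input current of 0.5.
state = {"mem": np.zeros(1)}
spikes = []
for _ in range(10):
    spk, state = lif_step(np.array([0.5]), state)
    spikes.append(int(spk[0]))
```

The explicit pass-in/get-out state dict mirrors the functional style that MLX transforms favor: no hidden state lives inside the neuron between calls.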
## Quick Start

## Features

### Neuron Models

| Model | Since | Description |
|-------|-------|-------------|
| **Leaky (LIF)** | v0.1 | Leaky Integrate-and-Fire with configurable decay |
| **IF** | v0.1 | Integrate-and-Fire (non-leaky) |
| **Izhikevich** | v0.2 | 2D dynamics with RS/IB/CH/FS presets |
| **Adaptive LIF** | v0.2 | LIF with adaptive threshold |
| **Synaptic** | v0.2 | Conductance-based dual-state LIF |
| **Alpha** | v0.2 | Dual-exponential synaptic model |
| **RLeaky** | v0.4 | Recurrent LIF with learnable feedback weight |
| **RSynaptic** | v0.4 | Recurrent Synaptic with learnable feedback weight |

### Surrogate Gradients

All neuron models support differentiable training via surrogate gradients:

- **Fast Sigmoid** — default; good balance of speed and accuracy
- **Arctan** — smoother gradient landscape
- **Sigmoid** — standard logistic sigmoid derivative
- **Triangular (Tent)** — localized, compact support near threshold
- **Straight-Through Estimator** — simplest; unit gradient everywhere
- **Custom** — plug in any smooth approximation

### Spike Encoding

| Method | Since | Use Case |
|--------|-------|----------|
| **Rate (Poisson)** | v0.1 | Static images, general-purpose |
| **Latency (TTFS)** | v0.1 | Energy-efficient, temporal coding |
| **Delta Modulation** | v0.2 | Temporal signals, change detection |
| **EEG Encoder** | v0.2 | EEG-to-spike with frequency band support |

### Training & Loss Functions

- BPTT forward-pass helper
- Loss functions for spike-based training objectives
- Learnable neuron parameters on all neurons
- Works with standard MLX optimizers

### NIR Interoperability

NIR (Neuromorphic Intermediate Representation) enables cross-framework SNN model exchange between simulators and neuromorphic hardware platforms. You can **export** an mlx-snn model to NIR and **import** a NIR model into mlx-snn; supported conversions cover the library's core neuron and layer types.
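The surrogate-gradient idea listed above is simple to state: the forward pass uses the non-differentiable Heaviside spike, while the backward pass substitutes a smooth derivative such as the fast sigmoid. A NumPy sketch of the two functions involved, where the function names and the `slope` value are illustrative choices rather than mlx-snn's:

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside step at the threshold."""
    return (v > threshold).astype(float)

def fast_sigmoid_grad(v, threshold=1.0, slope=25.0):
    """Backward pass substitute: derivative of the fast sigmoid
    u / (1 + slope * |u|) evaluated at u = v - threshold,
    i.e. 1 / (1 + slope * |v - threshold|)^2.
    """
    return 1.0 / (1.0 + slope * np.abs(v - threshold)) ** 2

v = np.linspace(0.0, 2.0, 5)   # membrane potentials around threshold 1.0
spk = spike_forward(v)          # hard 0/1 spikes for the forward pass
g = fast_sigmoid_grad(v)        # smooth gradient, peaked at the threshold
```

During training, an autodiff framework wires these together so that gradients flow through `g` even though the forward output is binary; the surrogate is largest near the threshold and decays away from it.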
## Benchmark Highlights

Experiments on MNIST (784-128-10 SNN, 25 timesteps, 5 seeds) on Apple M3 Max, compared with snnTorch on an NVIDIA V100:

| Configuration | mlx-snn (M3 Max) | snnTorch (V100) | Speed (mlx-snn) | Speed (snnTorch) |
|---------------|------------------|-----------------|-----------------|------------------|
| Leaky (LIF) | 96.3% | 97.3% | **5.7 s/epoch** | 20.9 s/epoch |
| Synaptic | 94.4% | 95.8% | 6.1 s/epoch | 25.2 s/epoch |
| RLeaky (V=0.1, learn) | 91.6% | 68.1% | 6.8 s/epoch | 25.7 s/epoch |
| RSynaptic (V=0.1, learn) | 89.0% | 52.2% | 7.3 s/epoch | 29.2 s/epoch |
| Fast Sigmoid surrogate | 96.3% | 96.7% | 5.7 s/epoch | 20.9 s/epoch |
| Triangular (Tent) surrogate | 86.0% | 50.8% | 10.9 s/epoch | 20.9 s/epoch |

mlx-snn achieves ~3.7-4.1x faster training per epoch on the M3 Max compared to the V100, while maintaining competitive accuracy. Recurrent neurons with learnable weights significantly outperform snnTorch's default configurations. For full results, see our benchmarking paper and the experiments/ directory.
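Static images such as the MNIST inputs in the benchmark above are typically fed to an SNN via rate (Poisson) encoding: each pixel intensity becomes a per-timestep firing probability. A minimal NumPy sketch of the idea; `rate_encode` and its signature are hypothetical, not mlx-snn's encoder API:

```python
import numpy as np

def rate_encode(x, num_steps, rng):
    """Bernoulli rate coding: an intensity in [0, 1] is the per-timestep
    firing probability of the corresponding input neuron.

    Illustrative sketch only; not the mlx-snn API.
    """
    x = np.clip(x, 0.0, 1.0)
    # Independent Bernoulli draw per timestep: shape (num_steps, *x.shape)
    return (rng.random((num_steps,) + x.shape) < x).astype(float)

rng = np.random.default_rng(0)
pixels = np.array([0.0, 0.2, 1.0])            # e.g. normalized pixel values
spikes = rate_encode(pixels, num_steps=1000, rng=rng)
rates = spikes.mean(axis=0)                    # empirical rates track intensities
```

Over many timesteps the empirical firing rate converges to the pixel intensity, which is why rate coding works well for static, general-purpose inputs at the cost of longer spike trains.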
## Migrating from snnTorch

mlx-snn is designed to feel familiar to snnTorch users. Key differences:

- **State is a dict**, not separate tensors — plays well with MLX functional transforms
- **No global hidden state** — state is always explicit (pass in, get out)
- **MLX arrays** instead of PyTorch tensors
- **Surrogate gradients** use the straight-through estimator (STE) pattern

## Project Structure

## Roadmap

- [x] **v0.1** — Core neurons (LIF, IF), surrogate gradients, rate/latency encoding
- [x] **v0.2** — Extended neurons (Izhikevich, ALIF, Synaptic, Alpha), EEG encoder, delta encoding
- [x] **v0.3** — NIR interoperability (export/import)
- [x] **v0.4** — Recurrent neurons, conv/pooling layers, neuromorphic datasets, TAC operators
- [x] **v0.5** — Direct/repeat encoding, activity regularization, SpikeDropout, visualization, SHD dataset
- [x] **v0.6** — CI/CD, API documentation site, complete examples
- [x] **v0.7** — Liquid State Machine, reservoir topology, optimization
- [ ] **v1.0** — Full documentation, comprehensive benchmarks, JOSS paper

## Publications

- **mlx-snn v0.1**: Spiking Neural Networks on Apple Silicon via MLX (arXiv, 2026)
- **mlx-snn v0.4**: Spiking Neural Network Training on Apple Silicon: Cross-Framework Benchmarking (in preparation)

## Citation

If you use mlx-snn in your research, please cite the arXiv paper listed above.

## Contributing

Contributions are welcome! Please open an issue or pull request on GitHub.

## License

GPL-3.0