
mofa-org / mofa

MoFA - Modular Framework for Agents. Modular, Compositional and Programmable.

202 stars
138 forks
464 issues
Rust · Shell · Python

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing mofa-org/mofa in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/mofa-org/mofa)

Repository Overview (README excerpt)


MoFA Agent Framework

English | 简体中文

Website | Quick Start | GitHub | Hackathon | Community

📋 Table of Contents

• Overview • Why MoFA? • Core Architecture • Core Features • Quick Start • Roadmap • Ecosystem & Related Repos • Documentation • Security • Contributing • Community • License

Overview

MoFA (Modular Framework for Agents) is not just another entry in the crowded agent framework landscape. It is the first production-grade framework to achieve **"write once, run everywhere"** across languages, built for **extreme performance, boundless extensibility, and runtime programmability**. Through its revolutionary microkernel architecture and innovative **dual-layer plugin system** (compile-time + runtime), MoFA strikes the elusive balance between raw performance and dynamic flexibility.

What Sets MoFA Apart:

✅ **Rust Core + UniFFI**: Blazing performance with native multi-language interoperability
✅ **Dual-Layer Plugins**: Zero-cost compile-time extensions meet hot-swappable runtime scripts
✅ **Microkernel Architecture**: Clean separation of concerns, effortless to extend
✅ **Cloud-Native by Design**: First-class support for distributed and edge deployments

Why MoFA?
**Performance**
• Zero-cost abstractions in Rust
• Memory safety without garbage collection
• Orders of magnitude faster than Python-based frameworks

**Polyglot by Design**
• Auto-generated bindings for Python, Java, Go, Kotlin, Swift via UniFFI
• Call Rust core logic natively from any supported language
• Near-zero overhead compared to traditional FFI

**Runtime Programmability**
• Embedded Rhai scripting engine
• Hot-reload business logic without recompilation
• Runtime configuration and rule adjustments
• User-defined extensions on the fly

**Dual-Layer Plugin Architecture**
• **Compile-time plugins**: Extreme performance, native integration
• **Runtime plugins**: Dynamic loading, instant effect
• Plugin hot loading and version management

**Distributed by Nature**
• Built on Dora-rs for distributed dataflow
• Seamless cross-process, cross-machine agent communication
• Edge computing ready

**Actor-Model Concurrency**
• Isolated agent processes via Ractor
• Message-passing architecture
• Battle-tested for high-concurrency workloads

Core Architecture

**Microkernel + Dual-Layer Plugin System**

MoFA adopts a **layered microkernel architecture**, achieving extreme extensibility through a **dual-layer plugin system**.

**Advantages of the Dual-Layer Plugin System**

**Compile-time Plugins (Rust/WASM)**
• Extreme performance, zero runtime overhead
• Type safety, compile-time error checking
• Support for complex system calls and native integration
• WASM sandbox provides secure isolation

**Runtime Plugins (Rhai Scripts)**
• No recompilation needed, changes take effect instantly
• Hot updates to business logic
• User-defined extensions
• Secure sandboxed execution with configurable resource limits

**Combined Power**
• Use Rust plugins for performance-critical paths (e.g., LLM inference, data processing)
• Use Rhai scripts for business logic (e.g., rule engines, workflow orchestration)
• Seamless interoperability between the two layers, covering 99% of extension scenarios

Core Features

**Microkernel Architecture**
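The microkernel's plugin registration center and the dual-layer plugin system described above can be sketched as a registry that dispatches to either statically-typed native plugins or hot-swappable runtime handlers. This is a deliberately simplified illustration, not MoFA's actual API: the names (`Plugin`, `PluginRegistry`, etc.) are hypothetical, and boxed closures stand in for sandboxed Rhai scripts.

```rust
use std::collections::HashMap;

/// Compile-time layer: plugins implement a trait and are statically typed.
trait Plugin {
    fn name(&self) -> &str;
    fn handle(&self, input: &str) -> String;
}

struct UppercasePlugin;

impl Plugin for UppercasePlugin {
    fn name(&self) -> &str { "uppercase" }
    fn handle(&self, input: &str) -> String { input.to_uppercase() }
}

/// Runtime layer: handlers registered and replaced at runtime
/// (in MoFA these would be Rhai scripts running in a sandbox).
type ScriptHandler = Box<dyn Fn(&str) -> String>;

#[derive(Default)]
struct PluginRegistry {
    native: HashMap<String, Box<dyn Plugin>>,
    scripts: HashMap<String, ScriptHandler>,
}

impl PluginRegistry {
    fn register_native(&mut self, p: Box<dyn Plugin>) {
        self.native.insert(p.name().to_string(), p);
    }

    fn register_script(&mut self, name: &str, f: ScriptHandler) {
        // Re-registering under the same name models a hot reload.
        self.scripts.insert(name.to_string(), f);
    }

    fn run(&self, name: &str, input: &str) -> Option<String> {
        // Runtime scripts shadow native plugins of the same name,
        // so business logic can be overridden without recompiling.
        if let Some(f) = self.scripts.get(name) {
            return Some(f(input));
        }
        self.native.get(name).map(|p| p.handle(input))
    }
}
```

The lookup order (scripts first, then native plugins) is one way to let a hot-loaded script override a compiled extension; a real registry would also track versions and lifecycle state.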
MoFA adopts a **layered microkernel architecture**. All other features (including the plugin system, LLM capabilities, multi-agent collaboration, etc.) are built as modular components on top of the microkernel.

**Core Design Principles**
• **Core Simplicity**: The microkernel contains only the most basic functions: agent lifecycle management, the metadata system, and dynamic management
• **High Extensibility**: All advanced features are added through modular components and plugins, keeping the kernel stable
• **Loose Coupling**: Components communicate through standardized interfaces and are easy to replace and upgrade

**Integration with the Plugin System**
• The plugin system is built on the microkernel's interfaces; all plugins (LLM plugins, tool plugins, etc.) are integrated through a standard interface
• The microkernel provides a plugin registration center and lifecycle management, supporting plugin hot loading and version control
• LLM capabilities are implemented by encapsulating LLM providers as plugins that comply with the microkernel's specifications

**Integration with LLMs**
• The LLM layer exists as a plugin component of the microkernel, providing standard LLM access through its interface
• All agent collaboration patterns (chain, parallel, debate, etc.)
are built on the microkernel's workflow engine and interact with LLMs through standardized LLM plugin interfaces
• Secretary mode is likewise built on the microkernel's A2A communication protocol and task scheduling system

**Dual-Layer Plugins**
• **Compile-time plugins**: Extreme performance, native integration
• **Runtime plugins**: Dynamic loading, instant effect
• Seamless collaboration between the two layers, covering all scenarios

**Agent Coordination**
• **Priority Scheduling**: Task scheduling system based on priority levels
• **Communication Bus**: Built-in inter-agent communication bus
• **Workflow Engine**: Visual workflow builder and executor

**LLM and AI Capabilities**
• **LLM Abstraction Layer**: Standardized LLM integration interface
• **OpenAI Support**: Built-in OpenAI API integration
• **ReAct Pattern**: Agent framework based on reasoning and action
• **Multi-Agent Collaboration**: LLM-driven agent coordination, supporting multiple collaboration modes:
  • **Request-Response**: One-to-one deterministic tasks with synchronous replies
  • **Publish-Subscribe**: One-to-many broadcast tasks with multiple receivers
  • **Consensus**: Multi-round negotiation and voting for decision-making
  • **Debate**: Agents alternate speaking to iteratively refine results
  • **Parallel**: Simultaneous execution with automatic result aggregation
  • **Sequential**: Pipeline execution where output flows to the next agent
  • **Custom**: User-defined modes interpreted by the LLM
• **Sec…
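The Sequential and Parallel collaboration modes listed above can be sketched with plain threads and channels. This is a minimal illustration under stated assumptions: agents are reduced to plain functions, whereas MoFA runs them as isolated Ractor actors communicating over its bus; the function names here are hypothetical.

```rust
use std::sync::mpsc;
use std::thread;

// An "agent" reduced to a pure transformation for illustration.
type Agent = fn(String) -> String;

/// Sequential mode: each agent's output flows into the next agent.
fn run_sequential(agents: &[Agent], input: String) -> String {
    agents.iter().fold(input, |acc, agent| agent(acc))
}

/// Parallel mode: every agent receives the same input concurrently,
/// and the results are aggregated in a deterministic order.
fn run_parallel(agents: &[Agent], input: &str) -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    for (i, agent) in agents.iter().copied().enumerate() {
        let tx = tx.clone();
        let input = input.to_string();
        thread::spawn(move || {
            // Tag each result with its index so aggregation does not
            // depend on which thread finishes first.
            tx.send((i, agent(input))).unwrap();
        });
    }
    drop(tx); // close the channel once all senders are spawned
    let mut results: Vec<(usize, String)> = rx.iter().collect();
    results.sort_by_key(|(i, _)| *i);
    results.into_iter().map(|(_, s)| s).collect()
}
```

Message passing over channels keeps each worker isolated, which is the same property the actor model provides; the other modes (debate, consensus, publish-subscribe) layer additional rounds or fan-out on top of these two primitives.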