
astroautomata / SymbolicRegression.jl

Distributed High-Performance Symbolic Regression in Julia

View on GitHub
774 stars
125 forks
71 issues

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing astroautomata/SymbolicRegression.jl in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/astroautomata/SymbolicRegression.jl)
Preview: Analyzed by RepoMind

Repository Overview (README excerpt)

Crawler view

_Inline code samples, badges, and embedded media were stripped in this crawler excerpt._

SymbolicRegression.jl searches for symbolic expressions which optimize a particular objective. Check out PySR for a Python frontend.

**Contents**:

- Quickstart
- MLJ Interface
- Low-Level Interface
- Constructing expressions
- Exporting to SymbolicUtils.jl
- Contributors ✨
- Code structure
- Search options

**Quickstart**

Install in Julia with the standard package manager.

**MLJ Interface**

The easiest way to use SymbolicRegression.jl is with MLJ. Create a model and train it on your data. Expressions are printed using the column names of your table; if a plain array is passed instead of a table-like object, default variable names are used. After training, you can inspect the discovered expressions and make predictions on new data. Predictions use the automatically selected expression, which by default balances accuracy and complexity; you can override this and manually select an equation from the Pareto front (for example, the second equation). For fitting multiple outputs, select specific equations by passing an array of indices. For a full list of options available to each regressor, see the API page.

**Low-Level Interface**

The heart of SymbolicRegression.jl is its search function. It takes a 2D array and attempts to model a 1D array using analytic functional forms. **Note:** unlike the MLJ interface, this assumes column-major input of shape [features, rows]. You can view the resulting equations in the dominating Pareto front (the best expression seen at each complexity); this is a vector whose entries contain each expression along with its cost. Each of these equations is an expression type parameterized by some constant type.
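Since the inline code samples did not survive the crawl, here is a rough sketch of the MLJ workflow described above. The names (`SRRegressor`, `machine`, `fit!`, `report`, `predict`) follow the SymbolicRegression.jl documentation, but the exact keyword arguments are assumptions; verify against the current API.

```julia
# Sketch of the MLJ workflow (assumed API -- check the current docs).
using MLJ: machine, fit!, predict, report
using SymbolicRegression: SRRegressor

# Table-like input: discovered expressions are printed with these column names.
X = (a = rand(500), b = rand(500))
y = @. 2 * cos(X.a * 23.5) - X.b^2

model = SRRegressor(
    niterations = 50,
    binary_operators = [+, -, *],
    unary_operators = [cos],
)

mach = machine(model, X, y)
fit!(mach)

report(mach)                        # discovered expressions (Pareto front)
predict(mach, X)                    # uses the automatically selected expression
predict(mach, (data = X, idx = 2))  # manually pick the second equation
```

Passing a plain matrix instead of the named tuple `X` would produce default variable names in the printed expressions.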
These expression objects are callable: you can simply pass in data.

**Constructing expressions**

Expressions are represented under the hood by a tree type developed in the DynamicExpressions.jl package; an expression type wraps this tree and includes metadata about operators and variable names. You can manipulate and construct expressions directly. If a tree has constants, the type of the entire tree is promoted to match; you can also convert all constants (recursively) to another precision, and then evaluate the tree on a dataset. The callable format is the easy-to-use version, which automatically sets all values to NaN if any Inf or NaN occurred during evaluation. The raw evaluation method can be called directly; it additionally returns a flag explicitly declaring whether the evaluation was successful.

**Exporting to SymbolicUtils.jl**

You can view the equations in the dominating Pareto frontier, convert the best equation to a SymbolicUtils.jl expression, and print out the full Pareto frontier.

**Contributors ✨**

We are eager to welcome new contributors! If you have an idea for a new feature, don't hesitate to share it on the issues page or forums.

- Mark Kittisopikul 💻 💡 🚇 📦 📣 👀 🔧 ⚠️
- T Coxon 🐛 💻 🔌 💡 🚇 🚧 👀 🔧 ⚠️ 📓
- Dhananjay Ashok 💻 🌍 💡 🚧 ⚠️
- Johan Blåbäck 🐛 💻 💡 🚧 📣 👀 ⚠️ 📓
- JuliusMartensen 🐛 💻 📖 🔌 💡 🚇 🚧 📦 📣 👀 🔧 📓
- ngam 💻 🚇 📦 👀 🔧 ⚠️
- Kaze Wong 🐛 💻 💡 🚇 🚧 📣 👀 🔬 📓
- Christopher Rackauckas 🐛 💻 🔌 💡 🚇 📣 👀 🔬 🔧 ⚠️ 📓
- Patrick Kidger 🐛 💻 📖 🔌 💡 🚧 📣 👀 🔬 🔧 ⚠️ 📓
- Okon Samuel 🐛 💻 📖 🚧 💡 🚇 👀 ⚠️ 📓
- William Booth-Clibborn 💻 🌍 📖 📓 🚧 👀 🔧 ⚠️
- Pablo Lemos 🐛 💡 📣 👀

_...truncated for preview_