
Realiserad / fish-ai

Supercharge your command line with LLMs and get shell scripting assistance in Fish. 💪

View on GitHub
503 stars
42 forks
3 issues
Python · Shell · Dockerfile

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing Realiserad/fish-ai in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/Realiserad/fish-ai)

Repository Overview (README excerpt)


About

adds AI functionality to Fish. It's awesome! I built it to make my life easier, and I hope it will make yours easier too. Here is the complete sales pitch:

• It can turn a comment into a shell command and vice versa, which means less time spent reading manpages, googling and copy-pasting from Stack Overflow. Great when working with , , and other tools with loads of parameters and switches.
• Did you make a typo? It can also fix a broken command (similarly to ).
• Not sure what to type next, or just lazy? Let the LLM autocomplete your commands with a built-in fuzzy finder.
• Everything is done using two (configurable) keyboard shortcuts; no mouse needed!
• It can be hooked up to the LLM of your choice (even a self-hosted one!).
• The whole thing is open source, hopefully somewhat easy to read, and around 2000 lines of code, which means that you can audit the code yourself in an afternoon.
• Install and update with ease using .
• Tested on both macOS and the most common Linux distributions.
• Does not interfere with , or any of the other plugins you're already using!
• Does not wrap your shell, install telemetry or force you to switch to a proprietary terminal emulator.

This plugin was originally based on Tom Dörr's repository. Without Tom, this repository would not exist! If you like it, please add a ⭐. Bug fixes are welcome!

I consider this project largely feature complete. Before opening a PR for a feature request, consider opening an issue where you explain what you want to add and why, and we can talk about it first.

🎥 Demo

👨‍🔧 How to install

Install

Make sure and either , or a supported version of Python along with and is installed. Then grab the plugin using :

Create a configuration

Create a configuration file (use if is not set) where you specify which LLM should talk to. If you're not sure, use GitHub Models.
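As a concrete sketch of what such a configuration could look like when selecting GitHub Models: the excerpt elides the documented file location, section names and option names, so everything below is an assumption for illustration only.

```ini
# Illustrative fish-ai configuration selecting GitHub Models.
# All names below are assumptions; consult the plugin's README
# for the authoritative file location and keys.
[fish-ai]
configuration = github

[github]
provider = github
# A GitHub personal access token; per the text above, no
# permissions are required on the PAT.
api_key = <your GitHub PAT>
model = gpt-4o-mini
```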
Anthropic

To use Anthropic:

Azure OpenAI

To use Azure OpenAI:

Cohere

To use Cohere:

DeepSeek

To use DeepSeek:

GitHub Models

To use GitHub Models:

You can create a personal access token (PAT) here. The PAT does not require any permissions.

Google

To use Gemini from Google:

Groq

To use Groq:

Mistral

To use Mistral:

OpenAI

To use OpenAI:

OpenRouter

To use OpenRouter:

Self-hosted

To use a self-hosted LLM (behind an OpenAI-compatible API):

If you are self-hosting, my recommendation is to use Ollama with Llama 3.3 70B. An out of the box configuration running on could then look something like this:

Available models are listed here.

Put the API key on your keyring

Instead of putting the API key in the configuration file, you can let load it from your keyring. To save a new API key or transfer an existing API key to your keyring, run .

🙉 How to use

Transform comments into commands and vice versa

Type a comment (anything starting with ), and press **Ctrl + P** to turn it into a shell command! Note that if your comment is very brief or vague, the LLM may decide to improve the comment instead of providing a shell command. You then need to press **Ctrl + P** again.

You can also run it in reverse. Type a command and press **Ctrl + P** to turn it into a comment explaining what the command does.

Autocomplete commands

Begin typing your command or comment and press **Ctrl + Space** to display a list of completions in (it is bundled with the plugin, no need to install it separately). To refine the results, type some instructions and press **Ctrl + P** inside .

Suggest fixes

If a command fails, you can immediately press **Ctrl + Space** at the command prompt to let suggest a fix!

🤸 Additional options

You can tweak the behaviour of by putting additional options in your configuration file.

Change the default key bindings

By default, binds to **Ctrl + P** and **Ctrl + Space**. You may want to change this if there is interference with any existing key bindings on your system.
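Returning to the self-hosted recommendation earlier in this section, here is a minimal sketch of an Ollama-backed configuration. The section names, option names and model tag are assumptions for illustration; Ollama conventionally serves an OpenAI-compatible API at http://localhost:11434/v1.

```ini
# Illustrative self-hosted configuration pointing at a local
# Ollama instance via its OpenAI-compatible endpoint. Names are
# assumptions; check the plugin's README for the real keys.
[fish-ai]
configuration = local-llama

[local-llama]
provider = self-hosted
server = http://localhost:11434/v1
model = llama3.3:70b
```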
To change the key bindings, set (defaults to **Ctrl + P**) and (defaults to **Ctrl + Space**) to the escape sequence of the key binding you want to use. To get the correct escape sequence, use . For example, if you have the following output from :

Then put the following in your configuration file:

Restart the shell for the changes to take effect.

Explain in a different language

To explain shell commands in a different language, set the option to the name of the language. For example:

This will only work well if the LLM you are using has been trained on a dataset with the chosen language.

Number of completions

To change the number of completions suggested by the LLM when pressing **Ctrl + Space**, set the option. The default value is . Here is an example of how you can increase the number of completions to :

To change the number of refined completions suggested by the LLM when pressing **Ctrl + P** in , set the option. The default value is .

Personalise completions using commandline history

You can personalise completions suggested by the LLM by sending an excerpt of your commandline history. To enable it, specify the maximum number of commands from the history to send to the LLM using the option. The default value is (do not send any commandline history). If you enable this option, consider the use of to automatically remove broken commands from your commandline history.

Preview pipes

To send the output of a pipe to the LLM when completing a command, use the option. This will send the output of the longest consecutive pipe after the last unterminated parenthesis before the cursor. For example, if you autocomplete , the output from will be sent to the LLM. This behaviour is disabled by default, as it may slow down the completion process and lead to commands being executed twice.

Configure the progress indicator

You can change the progress indicator (the default is ⏳) shown when the plugin is waiting for a response from the LLM.
To change the default, set the option to zero or more characters.

Use custom headers

You…
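Pulling the additional options above together, a hedged sketch of what such an options block could look like. The excerpt elides the documented option names and defaults, so every name and value below is an assumption made purely for illustration.

```ini
# Illustrative fish-ai options. Every name and value here is an
# assumption, not a documented default; check the plugin's README.
[fish-ai]
# Explain commands in Swedish instead of English.
language = Swedish
# Number of completions suggested on Ctrl + Space.
completions = 10
# Number of refined completions on Ctrl + P inside the fuzzy finder.
refined_completions = 5
# Maximum number of commandline history entries sent to the LLM.
history_size = 25
# Send the output of the current pipe when completing (off by default).
preview_pipe = True
# Characters shown while waiting for the LLM.
progress_indicator = ...
```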