stamparm / maltrail

Malicious traffic detection system

8,329 stars
1,255 forks
86 issues
Python · JavaScript · CSS

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing stamparm/maltrail in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

To optimize performance, source files are loaded only when you start an analysis.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/stamparm/maltrail)

Repository Overview (README excerpt)

Content

Introduction • Architecture • Demo pages • Requirements • Quick start • Administrator's guide • Sensor • Server • User's guide • Reporting interface • Real-life cases • Mass scans • Anonymous attackers • Service attackers • Malware • Suspicious domain lookups • Suspicious ipinfo requests • Suspicious direct file downloads • Suspicious HTTP requests • Port scanning • DNS resource exhaustion • Data leakage • False positives • Best practice(s) • License • Sponsors • Developers • Presentations • Publications • Blacklist • Thank you • Third-party integrations

Introduction

**Maltrail** is a malicious traffic detection system utilizing publicly available (black)lists of malicious and/or generally suspicious trails, along with static trails compiled from various AV reports and custom user-defined lists. A trail can be anything from a domain name (e.g. for Banjori malware), a URL (e.g. for a known malicious executable) or an IP address (e.g. for a known attacker) to an HTTP User-Agent header value (e.g. for an automatic SQL injection and database takeover tool). It also uses optional advanced heuristic mechanisms that can help in the discovery of unknown threats (e.g. new malware).

The following (black)lists (i.e. feeds) are being utilized. As for static entries, the trails for the following malicious entities (e.g. malware C&Cs or sinkholes) have been manually included (from various AV reports and personal research).

Architecture

Maltrail is based on the **Traffic** -> **Sensor** <-> **Server** <-> **Client** architecture. **Sensor**(s) is a standalone component running on the monitoring node (e.g. a Linux platform connected passively to a SPAN/mirroring port, or transparently inline on a Linux bridge) or on a standalone machine (e.g. a honeypot), where it "monitors" the passing **Traffic** for blacklisted items/trails (i.e. domain names, URLs and/or IPs).
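The blacklist matching just described, with trails covering domain names, IP addresses and HTTP User-Agent values, can be sketched in a few lines of Python. This is a minimal illustration only: the trail store, function names and sample entries below are assumptions for the sketch, not Maltrail's actual data structures or feed contents.

```python
# Hypothetical, simplified trail store; the real Maltrail loads feeds and
# static lists into a much larger, richer structure.
TRAILS = {
    "domain": {"ilo.brenz.pl"},   # illustrative blacklisted domain
    "ip": {"203.0.113.66"},       # TEST-NET address used as a stand-in
    "user_agent": {"sqlmap"},     # automatic SQL injection tool
}

def check_packet(dst_ip, query_domain=None, user_agent=None):
    """Return a list of (trail_type, trail) hits for one observed packet."""
    hits = []
    if dst_ip in TRAILS["ip"]:
        hits.append(("ip", dst_ip))
    if query_domain:
        # match the queried domain itself and any blacklisted parent domain
        parts = query_domain.lower().split(".")
        for i in range(len(parts) - 1):
            candidate = ".".join(parts[i:])
            if candidate in TRAILS["domain"]:
                hits.append(("domain", candidate))
    if user_agent:
        for trail in TRAILS["user_agent"]:
            if trail in user_agent.lower():
                hits.append(("user_agent", trail))
    return hits

print(check_packet("198.51.100.1", query_domain="www.ilo.brenz.pl"))
# → [('domain', 'ilo.brenz.pl')]
```

The parent-domain walk is what lets a single trail such as `ilo.brenz.pl` catch lookups for any of its subdomains.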
In case of a positive match, the event details are sent to the (central) **Server**, where they are stored in the appropriate logging directory (described in the *Configuration* section). If the **Sensor** is run on the same machine as the **Server** (the default configuration), logs are stored directly in the local logging directory; otherwise, they are sent as UDP messages to the remote server (also described in the *Configuration* section).

The **Server**'s primary role is to store the event details and provide back-end support for the reporting web application. In the default configuration, server and sensor run on the same machine, so to prevent potential disruptions of sensor activity, the front-end reporting part is based on a "fat client" architecture (i.e. all data post-processing is done inside the client's web browser instance). Events (i.e. log entries) for the chosen (24h) period are transferred to the **Client**, where the reporting web application alone is responsible for the presentation. Data is sent to the client in compressed chunks, which are processed sequentially. The final report is created in a highly condensed form, practically allowing the presentation of a virtually unlimited number of events.

Note: the **Server** component can be skipped altogether, using just the standalone **Sensor**. In that case, all events are stored in the local logging directory, and the log entries can be examined either manually or with a CSV-reading application.

Demo pages

Fully functional demo pages with collected real-life threats can be found here.

Requirements

To run Maltrail properly, Python **2.6**, **2.7** or **3.x** is required on a \*nix/BSD system, together with the installed pcapy-ng package. **NOTE:** Please use . The older library is deprecated and causes issues in Python 3 environments. Examples.
• The **Sensor** component requires at least 1 GB of RAM to run in single-process mode, or more if run in multiprocessing mode, depending on the value used for option . Additionally, the **Sensor** component (in the general case) requires administrative/root privileges.
• The **Server** component does not have any special requirements.

Quick start

The following set of commands should get your Maltrail **Sensor** up and running (out of the box with default settings and monitoring interface "any"):

• For **Ubuntu/Debian**
• For **SUSE/openSUSE**

Don't forget to put interfaces in promiscuous mode as needed. To start the (optional) **Server** on the same machine, open a new terminal and execute the following:

• For **Docker**

Currently only the server is available as a container image. Start the container with : If you need a fixed version, change the command to not start but for example ... or with : Don't edit the file directly, as it will be overwritten by . Instead, copy it to and edit that file; it is included in this repo's .

To test that everything is up and running, execute the following: Also, to test the capturing of DNS traffic, you can try the following: To stop the **Sensor** and **Server** instances (if running in the background), execute the following:

Access the reporting interface (i.e. **Client**) by visiting http://127.0.0.1:8338 (default credentials: ) from your web browser.

Administrator's guide

Sensor

The Sensor's configuration can be found inside the file's section : If option is set to , all CPU cores will be used: one core only for packet capture (with appropriate affinity, IO priority and nice level settings), while the other cores are used for packet processing. Otherwise, everything runs on a single core. Option can be used to turn off trail updates from feeds altogether (and just use the provided static ones). Option contains the number of seconds between each automatic trails update (Note: the default value is set to (i.e.
one day)) by using definitions inside the directory (Note: both the **Sensor** and the **Server** take care of the trails update). Option can be used to provide the location of a directory containing the custom…
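The Sensor-to-Server UDP event forwarding described in the Architecture section can be sketched as follows. This is an illustrative sketch only: the CSV field layout (timestamp, source IP, trail, note) and the localhost/ephemeral-port setup are assumptions for the demo; Maltrail defines its own wire format and reads the real server address and port from its configuration file.

```python
import csv
import io
import socket

def send_event(sock, server_addr, event):
    """Serialize one detection event as a CSV line and send it over UDP.

    The field layout here is an assumption for illustration; the real
    message format is defined by Maltrail itself.
    """
    buf = io.StringIO()
    csv.writer(buf).writerow(event)
    sock.sendto(buf.getvalue().strip().encode("utf-8"), server_addr)

def receive_event(sock):
    """Read one UDP datagram and parse it back into a list of fields."""
    data, _ = sock.recvfrom(65535)
    return next(csv.reader([data.decode("utf-8")]))

# demo over localhost: the "server" binds an ephemeral port,
# the "sensor" fires a single event at it
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
sensor = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_event(sensor, server.getsockname(),
           ["2024-01-01 12:00:00", "192.0.2.10", "ilo.brenz.pl", "blacklisted domain"])
print(receive_event(server))
```

Because the transport is plain UDP, a lost datagram simply means a lost log entry; this matches the fire-and-forget design where the sensor must never block on the server.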