veesix-networks / osvbng
Re-inventing the modern ISP BNG. Built on top of VPP, FRR and DPDK.
Repository Overview (README excerpt)
osvbng (Open Source Virtual Broadband Network Gateway) is a high-performance, scalable, open source BNG for ISPs, built to scale to multi-hundred-gigabit throughput on standard x86 COTS hardware.

**Documentation** | **Discord Community**

Key Features

• 400+ Gbps throughput with Intel DPDK (up to 100+ Gbps without DPDK)
• 20,000+ subscriber sessions
• Plugin-based architecture
• IPoE (DHCPv4 + DHCPv6) and PPPoE
• Modern monitoring stack
• Core implementation is fully open source
• Docker and KVM support

Get Started

QEMU / KVM with DPDK

For maximum performance with DPDK and PCI passthrough. Requires a dedicated server with:

• KVM/libvirt installed
• IOMMU enabled (Intel VT-d or AMD-Vi)
• At least 2 physical NICs for PCI passthrough (access and core)
• 4GB+ RAM and 4+ CPU cores recommended

**Quick Install:**

**Install Dependencies (Debian/Ubuntu):**

**Create Management Bridge:**

The VM requires a management bridge for out-of-band access (SSH, monitoring, etc.). This bridge connects the VM's virtio management interface to your network. To make it persistent, add the bridge to your host's network configuration, or declare it with Netplan.

**Manual Install:**

The installer will:

• Check prerequisites (IOMMU, vfio-pci, etc.)
• Let you select NICs for PCI passthrough (access and core)
• Download and deploy the osvbng VM image
• Configure the VM with your selected interfaces

**Manual Image Download (Optional):**

If you prefer to fetch the QEMU image separately, you can download the qcow2 images manually from the Releases page.

**Start and Connect:**

**Access the CLI:**

The VM auto-generates a default configuration on first boot. To customize it, edit the configuration file and restart the osvbng service.
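The management-bridge step above can be sketched with iproute2. This is a minimal sketch, not the installer's actual script: the bridge name (`br-mgmt`), NIC name (`eno1`), and address are placeholders.

```shell
# Create the management bridge and bring it up (run as root)
ip link add name br-mgmt type bridge
ip link set br-mgmt up

# Enslave the physical management NIC to the bridge
ip link set eno1 master br-mgmt

# Optionally move the host's management address onto the bridge, e.g.:
# ip addr add 192.0.2.10/24 dev br-mgmt
```

These commands do not survive a reboot; persist the bridge through your distribution's network configuration.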
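For persistence on Ubuntu hosts, a Netplan fragment along these lines could declare the same bridge. The file name, interface names, and the DHCP choice are assumptions, not the project's shipped configuration:

```yaml
# /etc/netplan/60-osvbng-mgmt.yaml (illustrative)
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br-mgmt:
      interfaces: [eno1]
      dhcp4: true
```

Apply it with `netplan apply`.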
Tested Host Operating Systems

• Ubuntu 22.04, 24.04
• Debian 12 (Bookworm)

Docker

**Prerequisites:**

• Docker installed
• A minimum of 2 physical network interfaces (access and core) if deploying in a non-test scenario

**Step 1: Start the container**

**Step 2: Attach network interfaces**

For production with physical NICs, substitute your own access and core interface names:

For testing without physical hardware:

!!! tip
    Container network interfaces must be recreated after each restart.

The script creates veth pairs (virtual ethernet pairs) that connect your host's physical interfaces to the container. A veth pair acts like a virtual cable with two ends: one end stays on the host and is bridged to your physical NIC, while the other end is moved into the container's network namespace. Network namespaces are tied to process IDs, which the kernel allocates at container start and which cannot be predicted. When a container restarts, it gets a new PID and a new namespace, breaking the connection to the old veth pair. This is why the veth pairs must be recreated after every restart. For "production" deployments, use the systemd service (or equivalent) to handle interface setup automatically on container restart.

**Step 3: Verify it's running**

**Step 4: Access the CLI**

"Production" Deployment

For "production" deployments, you need to ensure the container and its network interfaces are automatically set up on system boot. Below is an example using systemd (adjust for your init system if you use something else):

Check service status:

Customizing Configuration

Generate and customize the config file:

Mount it into the container:

Or update the systemd service file to include the volume mount.

Expectations

What can you expect from the open source version of this project?
Below are some key points we aim to deliver in every major release:

• A minimum of 100 Gbps out-of-the-box
• IPoE access technology with DHCPv4 support
• Subscriber authentication via DHCPv4 Option 82 (sub-options 1 and 2: Circuit ID and/or Remote ID)
• BGP, IS-IS and OSPF support
• Default VRF implementation only
• No QoS/HQoS support from day one of the v1.0.0 release
• A modern monitoring solution with Prometheus
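Returning to the Docker section above: the veth-pair mechanism it describes can be sketched with iproute2. This is an illustrative sketch, not the project's actual attach script; the container name (`osvbng`), interface names, and the assumption that a `br-access` bridge already exists are all placeholders.

```shell
# Resolve the container's PID -- its network namespace is tied to this PID
PID=$(docker inspect -f '{{.State.Pid}}' osvbng)

# Create a veth pair: one end for the host, one for the container
ip link add veth-acc-host type veth peer name veth-acc-cont

# Host end: bridge it to the physical access NIC (via bridge br-access)
ip link set veth-acc-host master br-access
ip link set veth-acc-host up

# Container end: move it into the container's namespace, rename, bring up
ip link set veth-acc-cont netns "$PID"
nsenter -t "$PID" -n ip link set veth-acc-cont name eth-access
nsenter -t "$PID" -n ip link set eth-access up
```

Because the PID changes on every container restart, these commands must be re-run each time, which is why the README recommends a systemd service (or equivalent) for "production" deployments.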
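A minimal systemd unit along the lines the Docker section describes might look like the following. The unit name, container name, and the path of the interface-attach script are assumptions; substitute your own:

```ini
# /etc/systemd/system/osvbng.service (illustrative)
[Unit]
Description=osvbng container and network interfaces
After=docker.service network-online.target
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Start the container, then recreate the veth pairs for the new namespace
ExecStart=/usr/bin/docker start osvbng
ExecStart=/usr/local/bin/osvbng-attach-interfaces.sh
ExecStop=/usr/bin/docker stop osvbng

[Install]
WantedBy=multi-user.target
```

Enable it at boot with `systemctl enable --now osvbng.service` and inspect it with `systemctl status osvbng`.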