
BradLarson / GPUImage2

GPUImage 2 is a BSD-licensed Swift framework for GPU-accelerated video and image processing.

4,935 stars
620 forks
206 issues
Swift · C++ · GLSL

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing BradLarson/GPUImage2 in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/BradLarson/GPUImage2)

Repository Overview (README excerpt)


# GPUImage 2

Brad Larson

http://www.sunsetlakesoftware.com

@bradlarson

contact@sunsetlakesoftware.com

## Overview

GPUImage 2 is the second generation of the GPUImage framework, an open source project for performing GPU-accelerated image and video processing on Mac, iOS, and now Linux. The original GPUImage framework was written in Objective-C and targeted Mac and iOS, but this latest version is written entirely in Swift and can also target Linux and future platforms that support Swift code.

The objective of the framework is to make it as easy as possible to set up and perform realtime video processing or machine vision against image or video sources. By relying on the GPU to run these operations, performance improvements of 100X or more over CPU-bound code can be realized. This is particularly noticeable in mobile or embedded devices. On an iPhone 4S, this framework can easily process 1080p video at over 60 FPS. On a Raspberry Pi 3, it can perform Sobel edge detection on live 720p video at over 20 FPS.

## License

BSD-style, with the full license available with the framework in License.txt.

Currently, GPUImage uses Lode Vandevenne's LodePNG for PNG output on Linux, as well as Paul Hudson's SwiftGD for image loading. LodePNG is released under the zlib license, and SwiftGD is released under the MIT license.

## Technical requirements

- Swift 3
- Xcode 8.0 on Mac or iOS
- iOS: 8.0 or higher (Swift is supported on iOS 7.0, but not Mac-style frameworks)
- OSX: 10.9 or higher
- Linux: wherever Swift code can be compiled. Currently, that's Ubuntu 14.04 or higher, along with the many other places it has been ported to. I've gotten this running on the latest Raspbian, for example. For camera input, Video4Linux needs to be installed.
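On a Debian-based distribution, the Linux prerequisites described above can be sketched as follows. The exact package names are assumptions and may differ by distribution or release:

```shell
# Sketch of the Linux prerequisites on a Debian/Ubuntu system.
# Package names are assumptions and may vary by distribution/release.
sudo apt-get update
sudo apt-get install libv4l-dev        # Video4Linux headers, for USB camera input
sudo apt-get install freeglut3-dev     # GLUT and OpenGL headers, used for output on desktop Linux
```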
## General architecture

The framework relies on the concept of a processing pipeline, where image sources are targeted at image consumers, and so on down the line until images are output to the screen, to image files, to raw data, or to recorded movies. Cameras, movies, still images, and raw data can be inputs into this pipeline. Arbitrarily complex processing operations can be built from a combination of a series of smaller operations.

This is an object-oriented framework, with classes that encapsulate inputs, processing operations, and outputs. The processing operations use OpenGL (ES) vertex and fragment shaders to perform their image manipulations on the GPU.

Examples for usage of the framework in common applications are shown below.

## Using GPUImage in a Mac or iOS application

To add the GPUImage framework to your Mac or iOS application, either drag the GPUImage.xcodeproj project into your application's project or add it via File | Add Files To...

After that, go to your project's Build Phases and add GPUImage_iOS or GPUImage_macOS as a Target Dependency. Add it to the Link Binary With Libraries phase. Add a new Copy Files build phase, set its destination to Frameworks, and add the upper GPUImage.framework (for Mac) or lower GPUImage.framework (for iOS) to that. That last step will make sure the framework is deployed in your application bundle.

In any of your Swift files that reference GPUImage classes, simply add an import of the framework and you should be ready to go. Note that you may need to build your project once so that Xcode can parse and build the GPUImage framework and stop warning you about the framework and its classes being missing.

## Using GPUImage in a Linux application

This project supports the Swift Package Manager, so you should be able to add it as a dependency in your Package.swift file, along with an import in your application code. Before compiling the framework, you'll need to get Swift up and running on your system.
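The Package.swift dependency described above can be sketched like this, using the Swift 3 era package manifest format. The version constraint is an assumption for illustration:

```swift
// Package.swift — a minimal sketch of declaring GPUImage2 as a dependency.
// The majorVersion value is an assumption; pin whatever release you target.
import PackageDescription

let package = Package(
    name: "SimpleVideoFilter",
    dependencies: [
        .Package(url: "https://github.com/BradLarson/GPUImage2.git", majorVersion: 0)
    ]
)
```

Then, in your application code, add `import GPUImage` wherever you reference the framework's classes.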
For desktop Ubuntu installs, you can follow Apple's guidelines on their Downloads page. After Swift, you'll need to install Video4Linux to get access to standard USB webcams as inputs. On the Raspberry Pi, you'll need to make sure that the Broadcom Videocore headers and libraries are installed for GPU access. For desktop Linux and other OpenGL devices (the Jetson family), you'll need to make sure GLUT and the OpenGL headers are installed.

The framework currently uses GLUT for its output. GLUT can be used on the Raspberry Pi via the new experimental OpenGL support there, but I've found that it's significantly slower than using the OpenGL ES APIs and the Videocore interface that ships with the Pi. Also, if you enable the OpenGL support, you currently lock yourself out of using the Videocore interface.

Once all of that is set up, you can run the Swift Package Manager's build command in the main GPUImage directory to build the framework, or do the same in the examples/Linux-OpenGL/SimpleVideoFilter directory. This will build a sample application that filters live video from a USB camera and displays the results in real time to the screen. The application itself will be contained within the .build directory and its platform-specific subdirectories. Look for the SimpleVideoFilter binary and run that.

## Performing common tasks

### Filtering live video

To filter live video from a Mac or iOS camera, you can write code like the following, where renderView is an instance of RenderView that you've placed somewhere in your view hierarchy. This instantiates a 640x480 camera instance, creates a saturation filter, and directs camera frames to be processed through the saturation filter on their way to the screen. startCapture() initiates the camera capture process. The --> operator chains an image source to an image consumer, and many of these can be chained in the same line.

### Capturing and filtering a still photo

Functionality not completed.

### Capturing an image from video

(Not currently available on Linux.)
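The live-video filtering setup described above can be sketched as follows for iOS. Camera, RenderView, and the --> operator appear in the text; the SaturationAdjustment class name is an assumption about the framework's saturation filter:

```swift
import UIKit
import GPUImage
import AVFoundation  // for the AVCaptureSessionPreset640x480 constant

class ViewController: UIViewController {
    var camera: Camera!
    var filter: SaturationAdjustment!          // class name is an assumption
    @IBOutlet weak var renderView: RenderView! // placed somewhere in your view hierarchy

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            // A 640x480 camera instance, chained through a saturation filter to the screen.
            camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480)
            filter = SaturationAdjustment()
            camera --> filter --> renderView
            camera.startCapture()              // begins the camera capture process
        } catch {
            fatalError("Could not initialize rendering pipeline: \(error)")
        }
    }
}
```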
To capture a still image from live video, you need to set a callback to be performed on the next frame of video that is processed. The easiest way to do this is to use the convenience extension to capture, encode, and save a file to disk: Under…
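A sketch of that convenience extension in use, continuing the earlier camera-to-filter pipeline. The saveNextFrameToURL(_:format:) method name and the .png format case are assumptions about the framework's API:

```swift
import GPUImage
import Foundation

// Assumed convenience API: ask the filter to capture, encode, and save
// its next processed frame to disk as a PNG.
let url = URL(fileURLWithPath: "capturedFrame.png")
filter.saveNextFrameToURL(url, format: .png)
```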