syedhali / EZAudio
An iOS and macOS audio visualization framework built upon Core Audio useful for anyone doing real-time, low-latency audio processing and visualizations.
AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing syedhali/EZAudio in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
Repository Overview (README excerpt)
A simple, intuitive audio framework for iOS and OSX.

**Deprecated**

EZAudio has recently been deprecated in favor of AudioKit. However, since some people are still forking and using EZAudio I've decided to restore the README as it was. Check out the note below.

**Apps Using EZAudio**

I'd really like to start creating a list of projects made using EZAudio. If you've used EZAudio to make something cool, whether it's an app, an open source visualization, or whatever, please email me at syedhali07[at]gmail.com and I'll add it to our wall of fame! To start it off:
• Detour - Gorgeous location-aware audio walks
• Jumpshare - Incredibly fast, real-time file sharing
• Piano Tuner (Home Edition)
• Piano Tuner (Pro Edition)
• Music Pitch Detector
• Piano Prober
• Multi Tuner

**Awesome Components**

I've designed six audio components and two interface components to allow you to immediately get your hands dirty recording, playing, and visualizing audio data. These components simply plug into each other and build on top of the high-performance, low-latency AudioUnits API, giving you an easy-to-use API written in Objective-C instead of pure C.

• EZAudioDevice - A useful class for getting all the current and available inputs/outputs on any Apple device. The EZMicrophone and EZOutput use this to direct sound in/out from different hardware components.
• EZMicrophone - A microphone class that provides its delegate audio data from the default device microphone with one line of code.
• EZOutput - An output class that will play back any audio it is provided by its datasource.
• EZAudioFile - An audio file class that reads/seeks through audio files and provides useful delegate callbacks.
• EZAudioPlayer - A player that combines an EZAudioFile and an EZOutput to perform robust playback of any file on any piece of hardware.
• EZRecorder - A recorder class that provides a quick and easy way to write audio files from any datasource.
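As a taste of how these components plug together, here is a minimal sketch of pulling audio from the default input with EZMicrophone. This is based on the EZAudio 1.x API; the view controller name is hypothetical and error handling is omitted:

```objc
#import <EZAudio/EZAudio.h>

@interface MyViewController () <EZMicrophoneDelegate>
@property (nonatomic, strong) EZMicrophone *microphone;
@end

@implementation MyViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // One line of code to get microphone audio delivered to this delegate.
    self.microphone = [EZMicrophone microphoneWithDelegate:self];
    [self.microphone startFetchingAudio];
}

// Called on a background thread with an array of float buffers,
// one per channel.
- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    // Process or visualize buffer[0] ... buffer[numberOfChannels - 1] here.
}

@end
```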
• EZAudioPlot - A Core Graphics-based audio waveform plot capable of visualizing any float array as a buffer or rolling plot.
• EZAudioPlotGL - An OpenGL-based, GPU-accelerated audio waveform plot capable of visualizing any float array as a buffer or rolling plot.

**Cross Platform**

EZAudio was designed to work transparently across all iOS and OSX devices. This means one universal API whether you're building for Mac or iOS. For instance, under the hood an EZAudioPlot knows to subclass a UIView for iOS or an NSView for OSX, and the EZMicrophone knows to build on top of the RemoteIO AudioUnit for iOS but defaults to the system defaults for input and output on OSX.

**Examples & Docs**

Within this repo you'll find examples for iOS and OSX to get you up to speed using each component and plugging them into each other. With just a few lines of code you'll be recording from the microphone, generating audio waveforms, and playing audio files like a boss. See the full Getting Started guide for an interactive look at each of the components.

**Example Projects**

**_EZAudioCoreGraphicsWaveformExample_** - Shows how to use the EZMicrophone and EZAudioPlot to visualize the audio data from the microphone in real time. The waveform can be displayed as a buffer or a rolling waveform plot (traditional waveform look).

**_EZAudioOpenGLWaveformExample_** - Shows how to use the EZMicrophone and EZAudioPlotGL to visualize the audio data from the microphone in real time. The drawing uses OpenGL, so performance is much better for plots needing a lot of points.

**_EZAudioPlayFileExample_** - Shows how to use the EZAudioPlayer and an audio plot to play back, pause, and seek through an audio file while displaying its waveform as a buffer or a rolling waveform plot.

**_EZAudioRecordWaveformExample_** - Shows how to use the EZMicrophone, EZRecorder, and an audio plot to record the audio from the microphone input to a file while displaying the audio waveform of the incoming data. You can then play back the newly recorded audio file using AVFoundation and keep appending more audio data to the tail of the file.
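The waveform examples above boil down to feeding microphone buffers into a plot view. A sketch of that wiring, based on the EZAudio 1.x API, with the plot configured as a filled, mirrored rolling waveform (the `audioPlot` outlet is an assumption):

```objc
// Configure the plot, e.g. in viewDidLoad.
self.audioPlot.plotType = EZPlotTypeRolling; // or EZPlotTypeBuffer
self.audioPlot.shouldFill = YES;
self.audioPlot.shouldMirror = YES;

// In the EZMicrophoneDelegate callback, push samples to the plot on the
// main thread, since the callback fires on a background thread.
- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    __weak typeof(self) weakSelf = self;
    dispatch_async(dispatch_get_main_queue(), ^{
        // Visualize the first channel of the incoming audio.
        [weakSelf.audioPlot updateBuffer:buffer[0]
                          withBufferSize:bufferSize];
    });
}
```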
**_EZAudioWaveformFromFileExample_** - Shows how to use the EZAudioFile and EZAudioPlot to animate in an audio waveform for an entire audio file.

**_EZAudioPassThroughExample_** - Shows how to use the EZMicrophone, EZOutput, and an audio plot to pass the microphone input to the output for playback while displaying the audio waveform (as a buffer or rolling plot) in real time.

**_EZAudioFFTExample_** - Shows how to calculate the real-time FFT of the audio data coming from the EZMicrophone using the Accelerate framework. The audio data is plotted using two plots for the time and frequency displays.

**Documentation**

The official documentation for EZAudio can be found here: http://cocoadocs.org/docsets/EZAudio/1.1.4/ You can also generate the docset yourself by running appledoc on the EZAudio source folder.

**Getting Started**

To begin using EZAudio you must first make sure you have the proper build requirements and frameworks. Below you'll find explanations of each component and code snippets showing how to perform common tasks like getting microphone data, updating audio waveform plots, reading/seeking through audio files, and performing playback.

**Build Requirements**

**iOS**
• 6.0+
**OSX**
• 10.8+

**Frameworks**

**iOS**
• Accelerate
• AudioToolbox
• AVFoundation
• GLKit
**OSX**
• Accelerate
• AudioToolbox
• AudioUnit
• CoreAudio
• QuartzCore
• OpenGL
• GLKit

**Adding To Project**

You can add EZAudio to your project in a few ways:
1.) The easiest way to use EZAudio is via Cocoapods. Simply add EZAudio to your Podfile. If you're also using The Amazing Audio Engine, then use the subspec instead.
2.) EZAudio now supports Carthage (thanks Andrew and Tommaso!). You can refer to Carthage's installation guide for a how-to: https://github.com/Carthage/Carthage
3.) Alternatively, you can check out the iOS/Mac examples for how to set up a project using the EZAudio project as an embedded project and utilizing the frameworks.
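The Cocoapods route above is a one-line Podfile entry; a sketch (the version constraint and the `EZAudio/Core` subspec name are assumptions based on the 1.1.x releases):

```ruby
# Podfile
pod 'EZAudio', '~> 1.1'

# If you're also using The Amazing Audio Engine, use the Core subspec
# instead so the two libraries don't duplicate shared dependencies:
# pod 'EZAudio/Core', '~> 1.1'
```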
Be sure to set your header search path to the folder containing the EZAudio source.

**Core Components**

EZAudio currently offers six audio components that encompass a wide range of functionality. In add…