Sound. — Minimal iOS auditory synthesis app (MVP)
Rationale: This repo ships a tiny, auditable iOS MVP using only system frameworks (SwiftUI, AVAudioEngine). We bias toward fewer parts and render-thread safety by keeping the render block allocation-free and using simple parameter mirroring. Non-critical features (e.g., pink noise) are deferred if they add weight.
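A minimal sketch of that pattern, assuming a hypothetical `ToneEngine` class (names and the smoothing constant are illustrative, not the shipped code):

```swift
import AVFoundation

final class ToneEngine {
    private let engine = AVAudioEngine()
    private var phase: Double = 0
    // Written from the UI, read on the render thread. A sketch-level
    // simplification: a production build would mirror this value through
    // an atomic rather than a plain stored property.
    var targetFrequency: Double = 440

    func start() throws {
        let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 2)
        var smoothed = targetFrequency

        // Allocation-free render block: no Objective-C calls, locks, or I/O.
        let source = AVAudioSourceNode { [self] _, _, frameCount, audioBufferList in
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                // Per-sample one-pole smoothing stands in for short parameter ramps.
                smoothed += 0.0005 * (targetFrequency - smoothed)
                phase += 2 * .pi * smoothed / sampleRate
                if phase > 2 * .pi { phase -= 2 * .pi }
                let sample = Float(sin(phase)) * 0.25
                for buffer in buffers {
                    buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
                }
            }
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: format)
        try engine.start()
    }
}
```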
- iOS 16+, Swift 5.9+
- Modes: Tone, Binaural, Isochronic, AM, White noise
- Offline synthesis via `AVAudioSourceNode`
- Presets load/save as JSON
- Background playback with `mixWithOthers` (session setup sketched after this list)
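A minimal sketch of the one-time session setup the last item implies (the function name is illustrative):

```swift
import AVFoundation

// .playback keeps audio running in the background (with the Audio background
// mode enabled); .mixWithOthers avoids interrupting other apps' audio.
func configureAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, options: [.mixWithOthers])
    try session.setActive(true)
}
```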
- Xcode 15.x on macOS.
- Python 3.12 for the research pipeline.
- Optional: set up the Python environment for the research pipeline:

```sh
python3 -m venv .venv && source .venv/bin/activate && pip install -r research/requirements.txt
```
- Open the Xcode project at `sound/sound.xcodeproj`.
- Ensure Background Modes → Audio is enabled, `PrivacyInfo.xcprivacy` is included, and the deployment target is iOS 16+.
- Build & Run on a device or simulator.
```sh
make -n      # dry-run to see tasks
make doctor  # check env
make crawl   # fetch links from endel.io/science
make ingest  # normalize refs and download PDFs
make quotes  # extract verbatim quotes with pages
make build   # compile research/research.md dossier
make test    # simple coverage checks
```
- Soft fades on start/stop (50–100 ms)
- Parameter ramps ≤20 ms
- Output ceiling ~−0.3 dBFS with soft clip
- DC blocker ~20 Hz (soft clip and DC blocker sketched after this list)
- Warnings for safe listening and epilepsy
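A minimal sketch of those two output-stage guards, with illustrative names; the DC-blocker coefficient follows from the 20 Hz cutoff, and the ceiling from 10^(−0.3/20) ≈ 0.966:

```swift
import Foundation

// One-pole DC blocker: y[n] = x[n] - x[n-1] + r * y[n-1].
struct DCBlocker {
    private var x1: Float = 0
    private var y1: Float = 0
    private let r: Float

    init(cutoffHz: Float = 20, sampleRate: Float = 48_000) {
        r = 1 - 2 * .pi * cutoffHz / sampleRate   // ≈ 0.9974 at 48 kHz
    }

    mutating func process(_ x: Float) -> Float {
        let y = x - x1 + r * y1
        x1 = x
        y1 = y
        return y
    }
}

// tanh soft clip that asymptotically pins the output just under -0.3 dBFS.
let ceiling: Float = pow(10, -0.3 / 20)   // ≈ 0.966

func softClip(_ x: Float) -> Float {
    ceiling * tanh(x / ceiling)
}
```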
- Stable preset order and file names (deterministic encoding sketched below)
- Research outputs overwrite deterministically
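A minimal sketch of deterministic preset serialization, assuming an illustrative `Preset` shape (the real fields live in the project): `.sortedKeys` makes `JSONEncoder` output byte-stable, so re-saving an unchanged preset rewrites identical file content.

```swift
import Foundation

struct Preset: Codable {
    var name: String
    var mode: String        // e.g., "tone", "binaural"
    var frequencyHz: Double
}

func encodePreset(_ preset: Preset) throws -> Data {
    let encoder = JSONEncoder()
    // Sorted keys + stable formatting keep file contents and diffs deterministic.
    encoder.outputFormatting = [.sortedKeys, .prettyPrinted]
    return try encoder.encode(preset)
}
```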
- Format: `ISO8601 | component | level | message`
- Lightweight rotation at ~1 MB (`.1` rollover); see the sketch below
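A minimal sketch of the line format and the rollover, with illustrative names:

```swift
import Foundation

let isoFormatter = ISO8601DateFormatter()

func logLine(component: String, level: String, message: String) -> String {
    "\(isoFormatter.string(from: Date())) | \(component) | \(level) | \(message)"
}

// Rotate app.log -> app.log.1 once it reaches ~1 MB, dropping the old rollover.
func rotateIfNeeded(at url: URL, maxBytes: Int = 1_000_000) throws {
    let attrs = try? FileManager.default.attributesOfItem(atPath: url.path)
    let size = (attrs?[.size] as? Int) ?? 0
    guard size >= maxBytes else { return }
    let rolled = url.appendingPathExtension("1")
    try? FileManager.default.removeItem(at: rolled)
    try FileManager.default.moveItem(at: url, to: rolled)
}
```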
See inline comments in `sound/` (the Xcode project) and `research/` for responsibilities. Docs live in `Docs/`.
- If audio is silent: check the iOS hardware mute switch, the app's background mode, and the session category.
- If you hear clicks: verify fades are enabled, the IO buffer is 5–10 ms (sketch below), and rates are within safe bounds.
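A 5–10 ms buffer can be requested up front; a minimal sketch (the system treats the value as a preference and may round it):

```swift
import AVFoundation

do {
    // ~5 ms; values in the 5-10 ms range keep latency low without underruns.
    try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)
} catch {
    print("IO buffer preference failed: \(error)")
}
```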
MIT. No data collected. See PRIVACY.