# VisionLearn

Accessible learning for children with Cortical Visual Impairment (CVI)
- Motivation and Background
- Features
- Screenshots
- Demo Video
- Technical Implementation
- How to Run
- Project Structure
- The Perkins CVI Protocol
- Future Steps
- License
## Motivation and Background

VisionLearn is a cross-platform mobile application (iOS & Android) designed to support accessible learning experiences for children with Cortical/Cerebral Visual Impairment (CVI).
Cortical Visual Impairment (CVI) is the leading cause of visual impairment in children in developed countries. Unlike ocular blindness, CVI affects how the brain processes visual information. Children with CVI often have normal eye exams but struggle with visual recognition, attention, and processing.
Parents, teachers, and therapists working with children with CVI need specialized learning tools that:
- Adapt to individual visual profiles
- Follow evidence-based protocols like the Perkins CVI framework
- Provide high contrast, simplified visuals
- Offer audio feedback and accessibility features
- Track progress for IEP meetings
Existing apps don't address these specific needs, forcing educators to manually create materials. VisionLearn solves this by providing AI-powered, adaptive learning activities that follow the Perkins CVI Protocol.
## Features

### Visual Profiles

- Based on the Perkins 16 Visual Behaviors framework
- Customizable CVI phase selection (Phase I, II, III)
- Personalized color preferences (yellow, red, pink, blue, green, orange)
- Adjustable complexity and timing settings
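A profile like the one above can be captured in a small domain model. This is an illustrative sketch, not VisionLearn's actual API: the names (`VisualProfile`, `CviPhase`, `defaultProfileFor`) and the per-phase complexity numbers are assumptions.

```kotlin
// Illustrative sketch of a visual profile model; names and values are hypothetical.

enum class CviPhase { PHASE_I, PHASE_II, PHASE_III }

enum class PreferredColor { YELLOW, RED, PINK, BLUE, GREEN, ORANGE }

data class VisualProfile(
    val childName: String,
    val phase: CviPhase,
    val preferredColor: PreferredColor,
    /** Maximum number of objects shown at once (array complexity). */
    val maxObjectsOnScreen: Int,
    /** Seconds to wait for a response before offering a prompt. */
    val responseTimeoutSeconds: Int,
)

/** Earlier CVI phases tolerate less visual complexity, so show fewer items. */
fun defaultProfileFor(name: String, phase: CviPhase): VisualProfile {
    val maxObjects = when (phase) {
        CviPhase.PHASE_I -> 1
        CviPhase.PHASE_II -> 3
        CviPhase.PHASE_III -> 6
    }
    return VisualProfile(name, phase, PreferredColor.YELLOW, maxObjects, responseTimeoutSeconds = 10)
}
```

The point of the model is that every activity screen can read one object to decide how many items to display and how long to wait.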
### Learning Modules

- **Image Recognition** - Identify objects with audio feedback and encouragement
- **Cause & Effect** - Touch-response activities for engagement and learning
- **Sorting** - Group objects by category with visual feedback
- **Matching** - Find matching pairs with progressive difficulty
- **Sequencing** - Arrange items in order (numbers, patterns, sizes)
### Accessibility

- Large touch targets (minimum 64dp)
- High contrast CVI-friendly color options
- Screen reader support (TalkBack/VoiceOver)
- Audio feedback for all interactions
- Text-to-Speech for instructions and encouragement
- Configurable response timeouts
- Simple, uncluttered visual design
### AI-Powered Content

- Dynamic question generation tailored to CVI needs
- Adaptive difficulty based on performance
- Personalized encouragement messages
- Fallback to local content when offline
### Custom Content Creator

- Create personalized activities with your own images
- Take photos or select from gallery
- Configure question text and correct answers
- Share activities across sessions
### Progress Tracking

- Session history and statistics
- Accuracy tracking per module
- Total activities completed
- Visual progress dashboard
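The per-module accuracy and completion counts above reduce to a small aggregation over session history. A minimal sketch, with hypothetical names (`SessionResult`, `accuracyByModule`) that are not VisionLearn's actual API:

```kotlin
// Illustrative sketch of per-module progress aggregation; names are hypothetical.

data class SessionResult(val module: String, val correct: Int, val total: Int)

/** Accuracy per module across all recorded sessions, as a 0.0-1.0 fraction. */
fun accuracyByModule(history: List<SessionResult>): Map<String, Double> =
    history.groupBy { it.module }.mapValues { (_, sessions) ->
        val correct = sessions.sumOf { it.correct }
        val total = sessions.sumOf { it.total }
        if (total == 0) 0.0 else correct.toDouble() / total
    }

/** Total activities attempted, for the "activities completed" dashboard figure. */
fun totalActivitiesCompleted(history: List<SessionResult>): Int =
    history.sumOf { it.total }
```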
## Screenshots

| Welcome | Child's Name | CVI Phase | Color Selection |
|---|---|---|---|
| ![]() | ![]() | ![]() | ![]() |
| Home Screen | AI Session Generation |
|---|---|
| ![]() | ![]() |
| Image Recognition | Cause & Effect | Sorting |
|---|---|---|
| ![]() | ![]() | ![]() |
| Matching | Sequencing |
|---|---|
| ![]() | ![]() |
| Progress Tracking | Content Creator | Visual Profile |
|---|---|---|
| ![]() | ![]() | ![]() |
| iOS AI Loading |
|---|
| ![]() |
## Technical Implementation

| Component | Technology |
|---|---|
| UI | Compose Multiplatform 1.7.3 |
| Language | Kotlin 2.1.0 |
| Architecture | MVVM + Clean Architecture |
| Navigation | Voyager 1.1.0-beta02 |
| Database | SQLDelight 2.0.2 |
| Networking | Ktor 3.0.3 |
| DI | Koin 4.0.0 |
| Image Loading | Coil 3.0.4 |
| AI | Google Gemini API |
### Architecture

The app follows Clean Architecture principles with maximum code sharing across platforms:

- **Domain Layer** - Business models, repository interfaces
- **Data Layer** - Repository implementations, database, API clients
- **Presentation Layer** - ViewModels (ScreenModels), UI components
- **AI Layer** - Gemini integration with local fallback
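The layering above can be sketched in a few lines. This is a hypothetical minimal example, not the app's real classes: the domain layer owns the model and the repository contract, the data layer implements it (the real app backs this with SQLDelight), and the presentation layer depends only on the interface.

```kotlin
// Illustrative Clean Architecture layering; all names are hypothetical.

// Domain layer: pure business model and repository contract.
data class Activity(val id: Long, val title: String)

interface ActivityRepository {
    fun allActivities(): List<Activity>
}

// Data layer: one implementation of the contract.
// The real app would back this with a SQLDelight database.
class InMemoryActivityRepository(
    private val activities: MutableList<Activity> = mutableListOf(),
) : ActivityRepository {
    fun add(activity: Activity) { activities += activity }
    override fun allActivities(): List<Activity> = activities.toList()
}

// Presentation layer: a ScreenModel-style class exposing state to the UI,
// depending only on the domain interface so it is trivially testable.
class ActivityListModel(private val repository: ActivityRepository) {
    fun titles(): List<String> = repository.allActivities().map { it.title }
}
```

Because the presentation layer sees only `ActivityRepository`, the same `ActivityListModel` runs unchanged on Android and iOS.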
### Platform-Specific Implementations

| Feature | Android | iOS |
|---|---|---|
| Database Driver | Android SQLite Driver | Native SQLite Driver |
| Text-to-Speech | Android TTS | AVSpeechSynthesizer |
| Image Picker | ActivityResultContracts | PHPickerViewController |
| Accessibility | TalkBack | VoiceOver |
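In Kotlin Multiplatform, the table above usually maps to `expect`/`actual` declarations in `commonMain`, `androidMain`, and `iosMain`. The sketch below expresses the same idea as a plain interface so it compiles on a single target; the names (`TextToSpeech`, `RecordingTts`) are illustrative, not VisionLearn's actual API.

```kotlin
// In shared code this would typically be an expect/actual declaration;
// it is sketched here as an interface. Names are illustrative.

interface TextToSpeech {
    fun speak(text: String)
}

// androidMain would back this with android.speech.tts.TextToSpeech,
// iosMain with AVSpeechSynthesizer. A recording fake for illustration:
class RecordingTts : TextToSpeech {
    val spoken = mutableListOf<String>()
    override fun speak(text: String) { spoken += text }
}

// Shared code only ever talks to the abstraction.
fun announceCorrectAnswer(tts: TextToSpeech) {
    tts.speak("Great job! That's correct!")
}
```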
### AI Integration

VisionLearn uses the Google Gemini API for intelligent content generation:

- `GeminiSessionGenerator` - Creates dynamic, CVI-appropriate questions
- `LocalSessionGenerator` - Offline fallback with pre-defined content
- Automatic fallback when the API is unavailable
- Timeout handling for responsive UX
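The fallback behavior above follows a common generator-with-fallback pattern. This is a simplified synchronous sketch (the real app is presumably asynchronous and talks to Gemini over the network); class and method names are hypothetical:

```kotlin
// Sketch of the remote-with-local-fallback pattern; names are hypothetical.

data class Question(val prompt: String, val answer: String)

interface SessionGenerator {
    fun generate(topic: String): List<Question>
}

/** Stands in for the Gemini-backed generator; fails when "offline". */
class RemoteSessionGenerator(private val online: Boolean) : SessionGenerator {
    override fun generate(topic: String): List<Question> {
        if (!online) error("API unavailable")
        return listOf(Question("Find the $topic", topic))
    }
}

/** Pre-defined offline content. */
class LocalSessionGenerator : SessionGenerator {
    override fun generate(topic: String): List<Question> =
        listOf(Question("Touch the picture of the $topic", topic))
}

/** Tries the remote generator first; falls back to local content on any failure. */
class FallbackSessionGenerator(
    private val primary: SessionGenerator,
    private val fallback: SessionGenerator,
) : SessionGenerator {
    override fun generate(topic: String): List<Question> =
        runCatching { primary.generate(topic) }.getOrElse { fallback.generate(topic) }
}
```

In production code the timeout handling mentioned above would wrap the primary call (e.g. with a coroutine timeout) so a slow API is treated the same as a failed one.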
## How to Run

### Prerequisites

- Android Studio Hedgehog (2023.1.1) or later
- Xcode 15+ (for iOS)
- JDK 17
- KDoctor for environment verification
### Setup

```shell
# Verify your environment
brew install kdoctor
kdoctor
```

```shell
git clone https://github.com/arulagarwal/VisionLearn.git
cd VisionLearn
```

Get a free API key from Google AI Studio:

1. Create/open `local.properties` in the project root
2. Add your API key:

```
GEMINI_API_KEY=your_api_key_here
```

Note: The app works without an API key using local fallback content, but AI-generated questions provide a better experience.
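One common way to surface such a key to the build is to read `local.properties` from the Gradle Kotlin DSL. This is a sketch of the read itself, not necessarily how VisionLearn wires the key through to shared code:

```kotlin
// build.gradle.kts (sketch): read GEMINI_API_KEY from local.properties.
// How the value then reaches shared code (e.g. a generated BuildConfig-style
// constant) is project-specific and not shown here.
import java.util.Properties

val localProps = Properties().apply {
    val file = rootProject.file("local.properties")
    if (file.exists()) file.inputStream().use { load(it) }
}
val geminiApiKey: String = localProps.getProperty("GEMINI_API_KEY", "")
```

Defaulting to an empty string keeps the build working when no key is configured, matching the app's local-fallback behavior.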
### Android

```shell
# Build debug APK
./gradlew :composeApp:assembleDebug

# Install on a connected device/emulator
./gradlew :composeApp:installDebug
```

Or open the project in Android Studio and run the `composeApp` configuration.
### iOS

```shell
# First, build the Kotlin framework
./gradlew :composeApp:linkDebugFrameworkIosSimulatorArm64

# Open the Xcode project
open iosApp/iosApp.xcodeproj
```

In Xcode:

1. Select an iPhone simulator (iPhone 15 recommended)
2. Press Cmd+R to build and run

Note: The first iOS build takes 2-5 minutes to compile the Kotlin framework.
### Troubleshooting

| Issue | Solution |
|---|---|
| "No such module 'ComposeApp'" in Xcode | Run `./gradlew :composeApp:linkDebugFrameworkIosSimulatorArm64` first |
| Gradle build fails with a Java error | Ensure `JAVA_HOME` points to JDK 17 |
| iOS linker errors for sqlite3 | Clean build in Xcode (Cmd+Shift+K) and rebuild |
## Project Structure

```
VisionLearn/
├── composeApp/
│   └── src/
│       ├── commonMain/          # Shared Kotlin code (95%+)
│       │   ├── kotlin/
│       │   │   ├── ai/          # AI services (Gemini + Local)
│       │   │   ├── data/        # Repositories, database
│       │   │   ├── di/          # Koin dependency injection
│       │   │   ├── domain/      # Models, interfaces
│       │   │   ├── platform/    # Expect declarations
│       │   │   └── presentation/ # Screens, components, themes
│       │   └── sqldelight/      # Database schema
│       ├── androidMain/         # Android implementations
│       │   └── kotlin/
│       │       ├── MainActivity.kt
│       │       ├── VisionLearnApplication.kt
│       │       └── platform/    # TTS, ImagePicker, etc.
│       └── iosMain/             # iOS implementations
│           └── kotlin/
│               ├── MainViewController.kt
│               └── platform/    # TTS, ImagePicker, etc.
├── iosApp/                      # Xcode project
│   ├── iosApp/
│   │   ├── iOSApp.swift
│   │   ├── ContentView.swift
│   │   └── Info.plist
│   └── iosApp.xcodeproj/
├── gradle/
│   └── libs.versions.toml       # Version catalog
├── screenshots/                 # App screenshots
├── build.gradle.kts
├── settings.gradle.kts
└── local.properties             # API keys (gitignored)
```
## The Perkins CVI Protocol

This app is designed around the Perkins CVI Protocol, a comprehensive educational assessment tool created by The CVI Center at Perkins School for the Blind. The protocol identifies 16 Visual Behaviors:

1. Visual Attention
2. Visual Recognition
3. Visual Curiosity
4. Object Complexity
5. Array Complexity
6. Sensory Complexity
7. Light Preference
8. Color Preference
9. Movement Need
10. Visual Latency
11. Visual Field Preference
12. Distance Viewing
13. Visual-Motor Integration
14. Visual Reflexes
15. Novelty Tolerance
16. Face Recognition
VisionLearn focuses on Color Preference and Object/Array Complexity - key factors that can be addressed through software adaptation.
Learn more: Perkins CVI Protocol
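Software adaptation for Color Preference can be as simple as rendering targets in the child's preferred color against a plain background. A hypothetical sketch; the hex values and function names are illustrative choices, not values from the app:

```kotlin
// Illustrative color-preference adaptation; hex values and names are hypothetical.

enum class PreferredColor(val hex: String) {
    YELLOW("#FFD600"),
    RED("#D50000"),
    PINK("#FF4081"),
    BLUE("#2962FF"),
    GREEN("#00C853"),
    ORANGE("#FF6D00"),
}

/** Highlight targets in the preferred color against a plain dark background. */
fun highlightStyle(preference: PreferredColor): Pair<String, String> =
    preference.hex to "#000000" // (foreground, background)
```

Complexity adaptation works the same way: the profile's phase bounds how many items an activity may place on screen at once.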
## Future Steps

- **Additional Visual Behaviors** - Support more of the 16 behaviors
- **AI Image Simplification** - Automatically simplify complex images
- **Switch Access** - Full switch device support for motor impairments
- **Multi-language Support** - Localization for global accessibility
- **Cloud Sync** - Sync profiles and progress across devices
- **Therapist Dashboard** - Web portal for professionals
- **Export Reports** - Generate progress reports for IEP meetings
## License

This project is licensed under the Apache License 2.0 - see the `LICENSE` file for details.
## Acknowledgments

- Perkins School for the Blind for the CVI Protocol framework
- JetBrains for Kotlin Multiplatform
- Google for Gemini AI
Created for the KotlinConf 2026 Kotlin Multiplatform Contest