DAW SDK: Complete architecture migration and refactor #15
Conversation
…ed performance

- Upgraded `lucide-react` from version 0.544.0 to 0.548.0 in `bun.lock` and `package.json`.
- Refactored DAWToolbar to streamline state management by removing unused grid state.
- Enhanced TimelineGridCanvas and TimelineGridHeader to use a new cached time grid atom for better performance and accuracy when rendering timeline markers.
- Introduced new utility functions for generating adaptive time grids, improving the responsiveness of the DAW interface.

…on grid mode

- Updated `cachedTimeGridAtom` to generate a time grid only when the grid mode is set to "time", returning an empty grid otherwise.
- Improved performance by preventing unnecessary calculations when the mode is not applicable.

- Moved `formatDuration` from `@/lib/storage/opfs` into `@/lib/daw-sdk/utils/time-utils.ts` for better organization.
- Updated imports across components to use the new location, ensuring consistent use of the duration formatting utility.
- Improved clarity and maintainability by centralizing utility functions within the daw-sdk.
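As a rough illustration of the kind of helper being centralized (the actual `time-utils.ts` implementation is not shown in this PR excerpt, so the signature and `m:ss` output format here are assumptions):

```typescript
// Hypothetical sketch of a centralized duration formatter like the one moved
// into daw-sdk/utils/time-utils.ts; the real signature/format may differ.
export function formatDuration(totalSeconds: number): string {
  if (!Number.isFinite(totalSeconds) || totalSeconds < 0) return "0:00";
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = Math.floor(totalSeconds % 60);
  return `${minutes}:${String(seconds).padStart(2, "0")}`;
}
```

Centralizing this in one module is what lets every component render durations identically instead of re-implementing the padding logic.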
- Added `@wav0/daw-sdk` and `@wav0/daw-react` packages to improve modularity and framework-agnostic capabilities.
- Updated `.gitignore` to exclude the `dist` directory.
- Enhanced `turbo.json` to include `dist/**` in build outputs for better deployment management.
- Introduced `MIGRATION_STATUS.md` and `sdk-architecture-51061cf6.plan.md` to document the SDK architecture migration and implementation plans.
- Refactored various components to use new SDK utilities for time formatting and volume management.

- Introduced `BRIDGE_USAGE_GUIDE.md` with comprehensive instructions for using the bridge pattern during the SDK migration, including setup, component usage, and best practices.
- Added `MIGRATION_PROGRESS.md` to track the status of the SDK architecture migration, detailing completed phases, tasks, and next steps for state and hook migration.
- Created bridge classes (`AudioServiceBridge` and `PlaybackServiceBridge`) for compatibility between legacy services and the new SDK architecture.
- Updated `DAWProvider` to support legacy service injection and bridge management, ensuring a smooth transition during the migration.
- Enhanced event synchronization between the legacy and new systems to maintain functionality throughout the migration.
- Refactored various components and hooks to use the new bridge system, enabling a gradual migration path without breaking existing functionality.

- Added `OPFSManager` to store audio files using the browser's Origin Private File System (OPFS).
- Integrated `DAWProvider` into the application, enabling legacy service support and bridging for audio operations.
- Enhanced `AudioEngine` with methods for saving, loading, and deleting audio files from OPFS, keeping backward compatibility with legacy services.
- Updated various components and hooks to use the new OPFS functionality, improving audio file management and persistence.
- Documented migration progress and completed steps in `IMPLEMENTATION_STATUS.md`, outlining the current state and next steps for the SDK migration.

- Completed Phase 1 performance fixes, including optimizations for the `usePlaybackSync` hook and window resize handling, resulting in smoother playhead movement and reduced CPU usage.
- Introduced new performance utilities in `performance.ts` for profiling and debouncing.
- Developed the `useBridgeMutations` hook for stable mutation operations across the DAW, ensuring type safety and compatibility with both old and new SDKs.
- Updated relevant components to use these enhancements, improving overall performance and maintainability.

…tom Synchronization

- Finalized performance fixes, including smooth playhead updates and debounced window resizing, significantly reducing CPU usage.
- Introduced new hooks for atom synchronization (`useDAWAtomSync`, `usePlaybackAtomSync`, `useTrackAtomSync`) to integrate legacy state management with the new event-driven SDK.
- Enhanced components with memoization to prevent unnecessary re-renders, improving responsiveness.
- Documented migration progress and outlined next steps for remaining tasks.

- Introduced `BUGS_FIXED.md` to document resolved issues, including parameter range conversions and build verification lessons learned.
- Created `MIGRATION_COMPLETE_SUMMARY.md` to summarize critical issues resolved during the SDK migration, detailing fixes and best practices established.
- Added `UTILITY_AUDIT.md` to compare old and new SDK utility functions, highlighting missing functions and required actions.
- Updated various components to use new SDK utilities for volume and time management.
- Refactored automation handling and curve evaluations to align with the new SDK structure, ensuring consistent behavior across the application.
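The volume utilities themselves are not shown in this excerpt; as a minimal sketch of the dB/linear-gain conversions an SDK module like this typically centralizes (the names `dbToGain`/`gainToDb` are illustrative, not necessarily the `@wav0/daw-sdk` API):

```typescript
// Illustrative dB <-> linear-gain helpers; the real SDK API may differ.
export function dbToGain(db: number): number {
  // 0 dB -> gain 1.0; every +20 dB multiplies amplitude by 10
  return 10 ** (db / 20);
}

export function gainToDb(gain: number): number {
  // gain <= 0 has no finite dB representation
  return gain > 0 ? 20 * Math.log10(gain) : Number.NEGATIVE_INFINITY;
}
```

Keeping this pair in one place avoids the parameter-range-conversion bugs the `BUGS_FIXED.md` entry refers to, since every caller shares the same 20·log10 convention.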
- Resolved clip automation unpredictability by changing how clip-attached points are managed, ensuring they transfer only when relevant.
- Removed same-track automation shifting, allowing clips to move independently without affecting their automation.
- Deduplicated automation points in `automation-lane.tsx` to prevent duplicate key warnings.
- Updated `curve-preview.tsx` to correctly map `safeShape` from the 0-1 range to -99 to +99.
- Verified that the playback bridge functionality remains intact.
- Progressed migration efforts while maintaining build stability.
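The `safeShape` remap described above is a linear interpolation from [0, 1] onto [-99, +99]; a hypothetical sketch of that mapping (the real `curve-preview.tsx` code is not shown here):

```typescript
// Sketch of the range remap: a normalized 0..1 shape value mapped linearly
// onto the -99..+99 curve range used by the preview. Illustrative only.
export function shapeToCurve(safeShape: number): number {
  const clamped = Math.min(1, Math.max(0, safeShape));
  return clamped * 198 - 99; // 0 -> -99, 0.5 -> 0, 1 -> +99
}
```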
- Updated clip movement logic so automation points are correctly managed during clip updates, improving predictability.
- Removed unnecessary automation shifting, allowing clips to move independently without affecting their automation.
- Added new methods in the PlaybackService to track scheduling lead time, improving playback accuracy.
- Enhanced automation transfer functions to support clip-relative timing, ensuring proper handling of automation points during clip movements.
- Updated tests to verify the behavior of automation transfers and clip updates.

- Documented identified bugs related to track-level automation transfer, overlapping automation segments, and visual playback accuracy.
- Outlined implementation steps to address each issue, including code changes and test updates.
- Emphasized keeping audio playback aligned with visual cues and preventing automation overlaps during scheduling.
- Included verification steps and future to-dos for ongoing refinements to the automation system.

- Introduced a scheduling epsilon to prevent overlapping audio segments in the PlaybackService, improving playback accuracy.
- Adjusted automation point handling during clip updates so track-level automation remains stationary while clip-bound points move.
- Simplified automation transfer logic by removing unnecessary checks and ensuring proper handling of clip-relative timing.
- Updated tests to verify the behavior of automation transfers and clip updates.
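A minimal sketch of an epsilon guard like the one described, assuming a small fixed epsilon and illustrative names (the actual PlaybackService constant and call sites are not shown in this excerpt):

```typescript
// Illustrative epsilon guard: when two scheduled segments would abut or
// overlap, trim the earlier one so the same instant is never scheduled twice.
const SCHEDULING_EPSILON_SEC = 0.001; // assumed value, for illustration

export function clampSegmentEnd(
  segmentEndSec: number,
  nextSegmentStartSec: number,
): number {
  // Leave an epsilon gap before the next segment begins.
  return Math.min(segmentEndSec, nextSegmentStartSec - SCHEDULING_EPSILON_SEC);
}
```

Web Audio rejects (or glitches on) automation curves whose time ranges overlap, so trimming the earlier segment by an epsilon is a common way to keep `setValueCurveAtTime` calls disjoint.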
- Introduced a high-precision playback time retrieval method in PlaybackService to improve visual sync with audio playback.
- Updated the time update loop to ensure immediate visual updates on playback start and to skip updates while playback is paused.
- Allowed the first time update to bypass throttling, improving responsiveness at playback start.
- Adjusted state management in the playback atoms to reset update flags at the appropriate boundaries, ensuring accurate time updates across sessions.

- Enhanced the time update callback to allow the first update even when the time hasn't changed, ensuring immediate visual feedback at playback start.
- Prevented unnecessary updates after the initial sync, optimizing performance during playback.
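The "first update bypasses throttling" rule above can be expressed as a small predicate; this is an illustrative sketch under assumed names, not the actual hook code:

```typescript
// Sketch of the update gate: always allow the first update after playback
// starts, then throttle subsequent ones by interval and by whether the time
// actually changed. All names here are assumptions.
export function shouldEmitTimeUpdate(state: {
  isFirstUpdate: boolean;
  lastEmittedTime: number;
  currentTime: number;
  msSinceLastEmit: number;
  throttleMs: number;
}): boolean {
  if (state.isFirstUpdate) return true; // immediate visual feedback at start
  if (state.currentTime === state.lastEmittedTime) return false;
  return state.msSinceLastEmit >= state.throttleMs;
}
```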
- Introduced a `bindEnvelopeToClips` function to bind automation points to clips based on their position, ensuring accurate clip-relative timing.
- Updated `updateClipAtom` and `updateTrackAtom` to use the new binding function, improving automation handling during clip updates.
- Enhanced tests to verify binding behavior and ensure automation points are managed correctly when clips move or are deleted.
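A minimal sketch of what binding like `bindEnvelopeToClips` might do, with assumed types and names (the SDK's real data shapes are not shown in this excerpt): a point whose absolute time falls inside a clip's span is tagged with that clip's id and a clip-relative offset, so it can travel with the clip.

```typescript
// Illustrative types; the real SDK shapes may differ.
interface Clip {
  id: string;
  startMs: number;
  endMs: number;
}

interface AutomationPoint {
  id: string;
  timeMs: number; // absolute timeline position
  value: number;
  clipId?: string;
  clipOffsetMs?: number;
}

// Tag each point with the clip it falls inside (if any) and its
// clip-relative offset; points outside every clip stay track-level.
export function bindPointsToClips(
  points: AutomationPoint[],
  clips: Clip[],
): AutomationPoint[] {
  return points.map((p) => {
    const clip = clips.find((c) => p.timeMs >= c.startMs && p.timeMs < c.endMs);
    return clip
      ? { ...p, clipId: clip.id, clipOffsetMs: p.timeMs - clip.startMs }
      : { ...p, clipId: undefined, clipOffsetMs: undefined };
  });
}
```

Storing the clip-relative offset is what makes the earlier fix possible: when a clip moves, bound points are re-derived from `clip.startMs + clipOffsetMs` while unbound (track-level) points stay put.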
- Updated the `mockGet` and `mockSet` functions in `automation-transfer.test.ts` to use arrow function syntax, improving type safety with the `Getter` and `Setter` types from `jotai`.
- Improved test clarity and maintainability by ensuring proper handling of atom reads and writes.

- Refactored the `synchronizeTracks` method in `PlaybackService` to return a Promise, ensuring proper handling of asynchronous operations during track synchronization.
- Updated `rescheduleTrack` to optionally accept all tracks for complete state synchronization, improving playback accuracy when clips are moved or modified.
- Modified various atoms to use the updated synchronization method, ensuring consistent playback behavior and error handling during track updates.
- Improved comments on the synchronization logic and its impact on clip playback.

… resolution, and weekly review

- Introduced command files to streamline development processes, including committing changes, creating pull requests, and removing unnecessary code.
- Added detailed instructions for resolving merge conflicts and conducting weekly reviews of commits, improving project management and code quality.

- Enhanced drag-and-drop for clips by reading the latest track state to avoid stale closures, ensuring accurate updates during clip movements.
- Improved logic for same-track and cross-track moves, including automation transfer and clip updates, to maintain playback integrity.
- Added comprehensive error handling and logging for drag commit operations.
- Updated state management to apply track updates atomically, preventing playback issues when clips are moved.

…onents

- Adjusted formatting in various files for readability, including consistent indentation and spacing.
- Updated JSX elements and function parameters for better alignment and clarity.
- Ensured consistent use of arrow functions and improved the structure of conditional statements.
- Made minor adjustments to comments for clarity and coding-standard adherence.

…unctionality

- Updated import statements in various components for consistency, removing redundant imports.
- Refactored the performance profiling utilities to simplify the API, replacing the class-based structure with functional exports.
- Enhanced performance measurement functions with clearer logging and improved handling of performance marks.
- Made minor CSS adjustments for better pointer event handling during drag operations.

- Modified the CSS selector for drag-and-drop to ensure accurate pointer event management while dragging.
- Increased the selector's specificity to prevent unintended interactions during drag events.
…ean up build artifacts

- Modified the cache key in the CI workflow to exclude TypeScript build info files, preventing unnecessary cache invalidation.
- Added a step to clean up existing TypeScript build info files before linting, ensuring a clean build environment.

- Modified the "check-types" task to depend on both the "build" and "check-types" tasks, ensuring type checks run after the build.

…tionality

- Added a `useEffectEvent` import to improve event handling in the export dialog component.
- Removed a redundant import to streamline the code.
- Introduced useRef to store the latest options, preventing unnecessary re-renders when callbacks change.
- Updated event handling to read the current options from the ref, ensuring accurate callback execution.
- Enhanced error handling in useBridges to gracefully return null when the DAW context is not ready.
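The "latest options in a ref" pattern keeps one stable callback identity while always reading the current options; a framework-free sketch of the same idea (illustrative, not the hook's actual code — in React, `useRef` plays the role of the `ref` object here):

```typescript
// The wrapper returned has a stable identity, but always invokes the most
// recently assigned callback. This is what storing options in a useRef
// achieves: effects keyed on the wrapper never re-run when options change.
export function makeLatestRef<A, R>(initial: (arg: A) => R) {
  const ref = { current: initial };
  const stable = (arg: A): R => ref.current(arg);
  return { ref, stable };
}
```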
- Renamed `bun typecheck` to `bun check-types` in AGENTS.md for clarity.
- Added a `bun test` command to AGENTS.md for running tests.
- Updated the LICENSE file to reflect the correct software name.
- Included a `test` command in package.json and turbo.json for an improved testing workflow.
- Refined README.md for clarity and accessibility, including updates to the project description and community links.
- Removed the outdated DAW component README to streamline documentation.
- Introduced audio scheduling constants in daw-sdk for improved precision in audio handling.

- Added width and height attributes to the canvas element to ensure proper rendering dimensions.
- Enhanced the layout structure for better responsiveness within the TimelineGridCanvas component.
Note: CodeRabbit detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Added a Transport scheduler and DAW provider/hooks, introduced precise audio-scheduling constants, replaced the DOM/SVG timeline/grid with canvas implementations, added a unified playhead and snap/granularity timebase, migrated many DAW utilities into a daw-sdk package, removed several legacy UI components, and updated documentation and tooling config.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant UI as DAW UI
    participant Atoms as Atoms/TimeGrid
    participant Transport as Transport
    participant Engine as AudioEngine
    participant WebAudio as Web Audio API
    User->>UI: Press Play
    UI->>Transport: play(clips, fromTime)
    Transport->>Engine: request buffers (iterator)
    Engine->>WebAudio: create BufferSourceNodes & schedule starts
    WebAudio->>WebAudio: audio plays
    par Live updates
        Transport->>Atoms: update currentTime
        Atoms->>UI: update playhead (uses snap/timebase)
    end
```

```mermaid
sequenceDiagram
    participant UI as DAW UI
    participant Timebase as useTimebase
    participant Playback as PlaybackService
    participant WebAudio as Web Audio API
    UI->>Timebase: compute snapped position (granularity/custom)
    Timebase->>UI: return snapped time
    UI->>Playback: rescheduleTrack(updatedTrack)
    Playback->>WebAudio: cancelScheduledValues(from lookback)
    Playback->>WebAudio: schedule automation (with epsilon guards)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes

Areas needing extra attention:
Actionable comments posted: 29
🧹 Nitpick comments (12)
UTILITY_AUDIT.md (2)
1-1: Add a "Skip to content" link for better accessibility. Per the coding guidelines for Markdown files, include a "Skip to content" link at the top of the document to help with keyboard navigation and accessibility.
Apply this diff to add the skip link:
```diff
+[Skip to content](#main-content)
+
 # Utility Audit Report
```

Then, add an anchor ID to the first major section:

```diff
-## Old SDK Utils vs New SDK Utils
+## Old SDK Utils vs New SDK Utils {#main-content}
```
24-37: Clarify the distinction between time-utils.ts and time-grid.ts functions. The section header (line 24) refers to time-utils.ts functions, but the conclusion (line 37) refers to time-grid.ts. This creates ambiguity: readers may conflate the two sets of functions. Revise the conclusion to be explicit about which file's functions are actually used.
Apply this diff to improve clarity:
```diff
-**Conclusion**: `time-grid.ts` functions are actually used, not the deprecated ones
+**Conclusion**: The functions listed above (from `time-utils.ts`) are deprecated; the `time-grid.ts` functions below are the ones actually used and must not be deleted
```

apps/web/lib/daw-sdk/core/playback-shared.ts (1)
26-34: Migration looks correct, but consider adding segment curve support. The migration to the SDK's `curves.evaluateSegmentCurve` is correct, and using `MIN_AUTOMATION_SEGMENT_DURATION_SEC` improves maintainability. However, this function uses a hardcoded curve value of `0` (linear), while `scheduleTrackEnvelopeInRange` properly looks up segment curves from `envelope.segments` (lines 70-73). For consistent behavior, consider looking up the actual segment curve here as well.

Example refactor to add segment curve lookup:

```diff
 for (let i = 0; i < points.length - 1; i++) {
   const a = points[i];
   const b = points[i + 1];
+  const seg = envelope.segments?.find(
+    (s) => s.fromPointId === a.id && s.toPointId === b.id,
+  );
+  const curveValue = seg?.curve ?? 0;
   const segDur = Math.max(
     MIN_AUTOMATION_SEGMENT_DURATION_SEC,
     (b.time - a.time) / 1000,
   );
   const segStart = startSec + a.time / 1000;
   const curve = new Float32Array(64);
   for (let s = 0; s < curve.length; s++) {
     const t = s / (curve.length - 1);
-    curve[s] = curves.evaluateSegmentCurve(a.value, b.value, t, 0);
+    curve[s] = curves.evaluateSegmentCurve(a.value, b.value, t, curveValue);
   }
   param.setValueCurveAtTime(curve, segStart, segDur);
 }
```

.cursor/plans/sdk-architecture-51061cf6.plan.md (3)
30-36: Tests section is comprehensive; clarify file location and epsilon tolerance for the dedupe test. The test cases cover all key scenarios (clip-attached, same-track, time-range, dedupe). Consider clarifying:

- The exact file path for `automation-transfer.test.ts` (e.g., `apps/web/lib/daw-sdk/utils/...`)
- The epsilon tolerance for the dedupe test (line 36: "within epsilon") to ensure numerical stability across different time units
38-42: Migration safety strategy is sound; consider enumerating the specific "direct playback calls" to replace. The gradual migration approach (keep atoms locally, use bridge incrementally, then relocate) aligns well with the PR's "incremental, safe" objective. One enhancement:

- Line 41 mentions replacing "direct playback calls" with the bridge "where available"; consider adding a brief list (e.g., `schedule()`, `synchronize()`, etc.) so implementers know which calls to prioritize.
52-64: To-do items are comprehensive; clarify one compound task and one vague phrase. The rollout strategy is well-phased, and the to-dos are mostly actionable. Consider clarifying:

- Line 55 (item 2): "Remove `shiftTrackAutomationInRange`": add a search step to identify all call sites before removal
- Line 58 (item 5): "fix data reads" is vague; specify "ensure automation point reads use the correct time reference (clip-relative vs absolute)"
- Lines 62–64 capture the final atom migration and legacy cleanup well
AGENTS.md (1)
182-182: Consider using markdown link format. The bare URL works but could be formatted as a markdown link for consistency with other documentation links.

```diff
-- WEB AUDIO API DOCS: https://webaudio.github.io/web-audio-api/
+- [WEB AUDIO API DOCS](https://webaudio.github.io/web-audio-api/)
```

packages/daw-react/src/atoms/playback.ts (1)
9-12: Consider adding explicit type annotations to atoms. While Jotai infers types from initial values, explicit type annotations improve clarity and prevent accidental type changes during refactoring.
Apply this diff to add explicit types:
```diff
-export const isPlayingAtom = atom(false);
-export const currentTimeAtom = atom(0);
-export const bpmAtom = atom(120);
-export const loopingAtom = atom(false);
+export const isPlayingAtom = atom<boolean>(false);
+export const currentTimeAtom = atom<number>(0);
+export const bpmAtom = atom<number>(120);
+export const loopingAtom = atom<boolean>(false);
```

.cursor/commands/weekly-review.md (1)
1-7: Consider minor formatting improvements. The procedural outline is clear and serves its purpose well. However, consider these optional refinements:
- Add a trailing newline to the file (standard practice).
- Consider using hyphens instead of tildes for ranges (e.g., "7-10 days" instead of "7~10 days").
apps/web/components/daw/panels/track-grid-canvas.tsx (1)
19-68: LGTM! Solid implementation with proper ResizeObserver cleanup. The component correctly measures viewport height using ResizeObserver and properly cleans up on unmount. The data attribute selector approach for finding the scroll container is appropriate.
Optional refactor: Consider replacing inline styles (lines 58-62) with Tailwind classes for consistency with the coding guidelines, though the dynamic values (timelineWidth, viewportHeight) may justify the inline style approach here.
packages/daw-sdk/src/types/core.ts (1)
29-34: Use the MediaBunny typings for `LoadedTrack`. We already expose `LoadedTrack` in `core/audio-engine.ts` with the real MediaBunny types. Re-declaring it here with `any` silently drops all the type safety we rely on for engine integrations. Please import the actual MediaBunny types and keep the public surface consistent.

Based on learnings

```diff
+import type {
+  AudioBufferSink,
+  Input,
+  InputAudioTrack,
+} from "mediabunny";
+
 export interface LoadedTrack {
   id: string;
-  input: any; // MediaBunny Input
-  sink: any; // MediaBunny AudioBufferSink
-  audioTrack: any; // MediaBunny InputAudioTrack
+  input: Input;
+  sink: AudioBufferSink;
+  audioTrack: InputAudioTrack;
   duration: number;
 }
```

packages/daw-react/src/hooks/use-atom-sync.ts (1)
71-87: Consider a more specific type for the event detail. `handleTrackLoaded` uses `any` for the event detail type. Consider using the proper `AudioData` type from the SDK for better type safety.

Apply this diff to improve type safety:

```diff
+import type { AudioData } from "@wav0/daw-sdk";
+
 // Non-reactive track loaded handler
-const handleTrackLoaded = useEffectEvent((event: CustomEvent<any>) => {
-  const { id, duration, sampleRate, numberOfChannels } = event.detail;
+const handleTrackLoaded = useEffectEvent(
+  (event: CustomEvent<{ audioId: string; audioData: AudioData }>) => {
+    const { audioId: id, audioData } = event.detail;
+    const { duration, sampleRate, numberOfChannels } = audioData;

-  // Update tracks atom with new audio info
-  const updatedTracks = tracks.map((track: any) =>
-    track.id === id
-      ? {
-          ...track,
-          duration,
-          sampleRate,
-          numberOfChannels,
-        }
-      : track,
-  );
+    // Update tracks atom with new audio info
+    const updatedTracks = tracks.map((track: any) =>
+      track.id === id
+        ? {
+            ...track,
+            duration,
+            sampleRate,
+            numberOfChannels,
+          }
+        : track,
+    );

-  setTracks(updatedTracks);
-});
+    setTracks(updatedTracks);
+  },
+);
```
- 2~3 bullet points (< 80 characters) with a quick description

## Notes

- You should only commit work when instructed. Do not keep committing subsquent work unless explicitly told so
Fix typo in documentation.
The word "subsquent" should be "subsequent".
Apply this diff:
```diff
-- You should only commit work when instructed. Do not keep committing subsquent work unless explicitly told so
+- You should only commit work when instructed. Do not keep committing subsequent work unless explicitly told so
```
Check the diff against main, and remove all AI generated slop introduced in this branch.

This includes:

- Extra comments that a human wouldn't add or is inconsistent with the rest of the file
- Extra defensive checks or try/catch blocks that are abnormal for that area of the codebase (especially if called by trusted / validated codepaths)
- Casts to any to get around type issues
- Any other style that is inconsistent with the file
Hyphenate “AI-generated” for clarity
Minor wording nit: “AI-generated” should be hyphenated. Suggested edit:
```diff
-Check the diff against main, and remove all AI generated slop introduced in this branch.
+Check the diff against main, and remove all AI-generated slop introduced in this branch.
```
```tsx
import { volume } from "@wav0/daw-sdk";
import { useEffect, useState } from "react";
```
Reorder imports per coding guidelines.
React imports should precede third-party imports.
Apply this diff:

```diff
-import { volume } from "@wav0/daw-sdk";
 import { useEffect, useState } from "react";
+import { volume } from "@wav0/daw-sdk";
```
```tsx
const VOLUME_MIN_DB = -60;
const VOLUME_MAX_DB = 12;
const clampDb = (db: number) =>
  Math.max(VOLUME_MIN_DB, Math.min(VOLUME_MAX_DB, db));
```
🧩 Analysis chain
Consider using SDK clampDb to avoid duplication.
The local clampDb function and constants duplicate functionality already provided by volume.clampDb in the SDK. The SDK version also handles non-finite numbers more robustly by returning Number.NEGATIVE_INFINITY.
Verify whether the SDK's MIN_DB and MAX_DB constants match the local values (-60, 12). If they do, prefer the SDK implementation:
If the constants match, apply this diff:

```diff
-const VOLUME_MIN_DB = -60;
-const VOLUME_MAX_DB = 12;
-const clampDb = (db: number) =>
-  Math.max(VOLUME_MIN_DB, Math.min(VOLUME_MAX_DB, db));
```

Then update the imports:

```diff
-import { volume } from "@wav0/daw-sdk";
+import { volume } from "@wav0/daw-sdk";
+
+const { MIN_DB: VOLUME_MIN_DB, MAX_DB: VOLUME_MAX_DB, clampDb } = volume;
```
+const { MIN_DB: VOLUME_MIN_DB, MAX_DB: VOLUME_MAX_DB, clampDb } = volume;🏁 Script executed:
#!/bin/bash
# Check SDK volume constants and clampDb export
cat packages/daw-sdk/src/utils/volume.ts | grep -E '(MIN_DB|MAX_DB|clampDb)' -A 2Length of output: 500
🏁 Script executed:
#!/bin/bash
# Check if SDK exports automation clampDb function
cat packages/daw-sdk/src/utils/volume.ts | grep -E '(automation.*clamp|clamp.*automation)' -i -B 2 -A 5Length of output: 241
🏁 Script executed:
#!/bin/bash
# Show full volume.ts to understand export structure
cat packages/daw-sdk/src/utils/volume.ts | head -100Length of output: 2604
🏁 Script executed:
#!/bin/bash
# Check usage context in track-context-menu.tsx around lines 75, 87 to confirm automation vs track
cat apps/web/components/daw/context-menus/track-context-menu.tsx | grep -E '(clampDb|VOLUME_MIN|VOLUME_MAX|automation|track)' -n -B 3 -A 3 | head -60Length of output: 1543
🏁 Script executed:
#!/bin/bash
# Check what onSetVolumeDb does and find its usage pattern
rg -n 'onSetVolumeDb|currentDb' apps/web/components/daw/context-menus/track-context-menu.tsx -B 2 -A 2Length of output: 2350
🏁 Script executed:
#!/bin/bash
# Search for similar volume controls in other files to understand pattern
rg -n 'VOLUME_MIN_DB|VOLUME_MAX_DB|clampDb' apps/web --type=tsx --type=ts -l | head -10Length of output: 27
🏁 Script executed:
#!/bin/bash
# Check imports and track data structure to see if this is actually volume or automation
cat apps/web/components/daw/context-menus/track-context-menu.tsx | head -40Length of output: 939
```
const themeColors = useMemo(() => {
  if (!canvasRef.current) return null;

  const styles = getComputedStyle(canvasRef.current);
  return {
    sub:
      styles.getPropertyValue("--timeline-grid-sub").trim() ||
    minor:
      styles.getPropertyValue("--timeline-grid-minor").trim() ||
      "rgba(255,255,255,0.15)",
    beat:
      styles.getPropertyValue("--timeline-grid-beat").trim() ||
      "rgba(255,255,255,0.3)",
    measure:
      styles.getPropertyValue("--timeline-grid-measure").trim() ||
      "rgba(255,255,255,0.5)",
    label:
      styles.getPropertyValue("--timeline-grid-label").trim() ||
      "rgba(255,255,255,0.7)",
    major:
      styles.getPropertyValue("--timeline-grid-major").trim() ||
      "rgba(255,255,255,0.4)",
  };
}, []); // Only compute once on mount
}, []);

// Initialize controller
useEffect(() => {
useLayoutEffect(() => {
  const canvas = canvasRef.current;
  if (!canvas) return;
  if (!canvas || !themeColors) return;

  controllerRef.current = new CanvasGridController(canvas);
  const ctx = canvas.getContext("2d");
  if (!ctx) return;

  return () => {
    if (controllerRef.current) {
      controllerRef.current.dispose();
      controllerRef.current = null;
    }
  };
}, []);
  ctx.clearRect(0, 0, width, height);

  const viewportStart = scrollLeft / pxPerMs;
  const viewportEnd = (scrollLeft + width) / pxPerMs;

// Draw grid when dependencies change
useEffect(() => {
  const controller = controllerRef.current;
  if (!controller || !themeColors) return;
  ctx.strokeStyle = themeColors.minor;
  ctx.lineWidth = 1;
  ctx.beginPath();
  for (const ms of timeGrid.minors) {
    if (ms < viewportStart || ms > viewportEnd) continue;
    const x = Math.round(ms * pxPerMs);
    ctx.moveTo(x, 0);
    ctx.lineTo(x, height);
  }
  ctx.stroke();

  controller.draw({
    width,
    height,
    pxPerMs: deferredPxPerMs,
    scrollLeft: deferredScrollLeft,
    grid,
    themeColors,
  });
}, [width, height, deferredPxPerMs, deferredScrollLeft, grid, themeColors]);
  ctx.strokeStyle = themeColors.major;
  ctx.lineWidth = 1;
  ctx.beginPath();
  for (const marker of timeGrid.majors) {
    if (marker.ms < viewportStart || marker.ms > viewportEnd) continue;
    const x = Math.round(marker.ms * pxPerMs);
    ctx.moveTo(x, 0);
    ctx.lineTo(x, height);
  }
  ctx.stroke();
}, [width, height, pxPerMs, scrollLeft, timeGrid, themeColors]);
```
Canvas never draws because themeColors is frozen as null.
useMemo runs before the ref resolves, so it caches null forever. Because themeColors stays null, the layout effect exits early and the grid never renders. Compute the theme colors inside the effect (or recompute whenever the canvas ref is ready) so the draw path actually runs.
- const themeColors = useMemo(() => {
- if (!canvasRef.current) return null;
-
- const styles = getComputedStyle(canvasRef.current);
- return {
- minor:
- styles.getPropertyValue("--timeline-grid-minor").trim() ||
- "rgba(255,255,255,0.15)",
- major:
- styles.getPropertyValue("--timeline-grid-major").trim() ||
- "rgba(255,255,255,0.4)",
- };
- }, []);
-
- useLayoutEffect(() => {
- const canvas = canvasRef.current;
- if (!canvas || !themeColors) return;
+ useLayoutEffect(() => {
+ const canvas = canvasRef.current;
+ if (!canvas) return;
- const ctx = canvas.getContext("2d");
+ const ctx = canvas.getContext("2d");
if (!ctx) return;
+ const styles = getComputedStyle(canvas);
+ const themeColors = {
+ minor:
+ styles.getPropertyValue("--timeline-grid-minor").trim() ||
+ "rgba(255,255,255,0.15)",
+ major:
+ styles.getPropertyValue("--timeline-grid-major").trim() ||
+ "rgba(255,255,255,0.4)",
+ };
+
ctx.clearRect(0, 0, width, height);
@@
- }, [width, height, pxPerMs, scrollLeft, timeGrid, themeColors]);
+ }, [width, height, pxPerMs, scrollLeft, timeGrid]);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
useLayoutEffect(() => {
  const canvas = canvasRef.current;
  if (!canvas) return;
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const styles = getComputedStyle(canvas);
  const themeColors = {
    minor:
      styles.getPropertyValue("--timeline-grid-minor").trim() ||
      "rgba(255,255,255,0.15)",
    major:
      styles.getPropertyValue("--timeline-grid-major").trim() ||
      "rgba(255,255,255,0.4)",
  };
  ctx.clearRect(0, 0, width, height);
  const viewportStart = scrollLeft / pxPerMs;
  const viewportEnd = (scrollLeft + width) / pxPerMs;
  ctx.strokeStyle = themeColors.minor;
  ctx.lineWidth = 1;
  ctx.beginPath();
  for (const ms of timeGrid.minors) {
    if (ms < viewportStart || ms > viewportEnd) continue;
    const x = Math.round(ms * pxPerMs);
    ctx.moveTo(x, 0);
    ctx.lineTo(x, height);
  }
  ctx.stroke();
  ctx.strokeStyle = themeColors.major;
  ctx.lineWidth = 1;
  ctx.beginPath();
  for (const marker of timeGrid.majors) {
    if (marker.ms < viewportStart || marker.ms > viewportEnd) continue;
    const x = Math.round(marker.ms * pxPerMs);
    ctx.moveTo(x, 0);
    ctx.lineTo(x, height);
  }
  ctx.stroke();
}, [width, height, pxPerMs, scrollLeft, timeGrid]);
🤖 Prompt for AI Agents
In apps/web/components/daw/panels/timeline-grid-canvas.tsx around lines 27 to
74, themeColors is computed with useMemo before canvasRef.current exists causing
it to be null forever and preventing drawing; move the theme color computation
into the useLayoutEffect (or make useMemo depend on canvasRef.current) so
themeColors is derived after the ref is available, then proceed to get the 2D
context and draw — specifically, inside the effect read
getComputedStyle(canvasRef.current), build the minor/major color values, and
only then run the clearRect and stroke logic (or add canvasRef.current to the
useMemo deps and handle updates accordingly).
export class AudioEngine extends EventTarget {
  private loadedTracks = new Map<string, LoadedTrack>();

  constructor(
    private audioContext: AudioContext,
    private opfsManager?: OPFSManager,
  ) {
    super();
  }

  async loadAudio(file: File, id: string): Promise<AudioData> {
    const input = new Input({
      formats: ALL_FORMATS,
      source: new BlobSource(file),
    });

    const audioTrack = await input.getPrimaryAudioTrack();
    if (!audioTrack) throw new Error("No audio track found");

    const sink = new AudioBufferSink(audioTrack);
    const duration = await audioTrack.computeDuration();

    this.loadedTracks.set(id, {
      id,
      input,
      sink,
      audioTrack,
      duration,
    });

    // Emit event for persistence layer
    this.dispatchEvent(
      new CustomEvent("trackloaded", {
        detail: {
          id,
          fileName: file.name,
          size: file.size,
          duration,
          sampleRate: audioTrack.sampleRate,
        },
      }),
    );

    return {
      id,
      duration,
      sampleRate: audioTrack.sampleRate,
      numberOfChannels: audioTrack.numberOfChannels,
    };
  }

  async getBufferIterator(
    audioId: string,
    startTime: number = 0,
    endTime?: number,
  ): Promise<
    AsyncIterableIterator<{ buffer: AudioBuffer; timestamp: number }>
  > {
    const track = this.loadedTracks.get(audioId);
    if (!track) throw new Error(`Audio ${audioId} not loaded`);

    return track.sink.buffers(startTime, endTime);
  }

  getTrack(audioId: string): LoadedTrack | undefined {
    return this.loadedTracks.get(audioId);
  }

  hasTrack(audioId: string): boolean {
    return this.loadedTracks.has(audioId);
  }

  async saveToOPFS(audioId: string, buffer: ArrayBuffer): Promise<void> {
    if (!this.opfsManager) {
      throw new Error("OPFS manager not configured");
    }
    await this.opfsManager.saveAudioFile(audioId, buffer);
  }

  async loadFromOPFS(
    audioId: string,
    fileName: string,
  ): Promise<AudioData | null> {
    if (!this.opfsManager) {
      throw new Error("OPFS manager not configured");
    }

    const arrayBuffer = await this.opfsManager.loadAudioFile(audioId);
    if (!arrayBuffer) {
      return null;
    }

    const input = new Input({
      formats: ALL_FORMATS,
      source: new BlobSource(new Blob([arrayBuffer])),
    });

    const audioTrack = await input.getPrimaryAudioTrack();
    if (!audioTrack) throw new Error("No audio track in OPFS file");

    const sink = new AudioBufferSink(audioTrack);
    const duration = await audioTrack.computeDuration();

    this.loadedTracks.set(audioId, {
      id: audioId,
      input,
      sink,
      audioTrack,
      duration,
    });

    return {
      id: audioId,
      duration,
      sampleRate: audioTrack.sampleRate,
      numberOfChannels: audioTrack.numberOfChannels,
    };
  }

  async deleteFromOPFS(audioId: string): Promise<void> {
    if (!this.opfsManager) {
      throw new Error("OPFS manager not configured");
    }
    await this.opfsManager.deleteAudioFile(audioId);
    this.loadedTracks.delete(audioId);
  }

  dispose(): void {
    // MediaBunny resources are garbage collected
    this.loadedTracks.clear();
  }
Expose an unload hook on AudioEngine.
There’s no way to drop a loaded track without also deleting OPFS data. As soon as the bridge calls loadAudio, the loadedTracks map hangs onto the MediaBunny graph forever, so callers can’t free memory. Please add a lightweight unloadTrack(audioId: string) (delete from the map and tear down any sinks if needed) so consumers can balance loadAudio.
async deleteFromOPFS(audioId: string): Promise<void> {
if (!this.opfsManager) {
throw new Error("OPFS manager not configured")
}
await this.opfsManager.deleteAudioFile(audioId)
this.loadedTracks.delete(audioId)
}
+
+ unloadTrack(audioId: string): void {
+ this.loadedTracks.delete(audioId)
+ }
🤖 Prompt for AI Agents
In packages/daw-sdk/src/core/audio-engine.ts around lines 28 to 158, add a new
public unloadTrack(audioId: string): void method that removes the LoadedTrack
from loadedTracks and tears down any associated resources: if the track exists,
call any available cleanup on the sink (e.g., sink.close(), sink.destroy(), or
sink.stop() — choose the correct API provided by AudioBufferSink), null out or
dereference input/sink/audioTrack to allow GC, delete the entry from the map,
and dispatch a "trackunloaded" CustomEvent with detail { id: audioId } so
callers can observe the unload; ensure the method checks for the OPFS-only
delete separation (do not touch OPFS storage) and is no-op if the track is not
present.
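A minimal sketch of what the requested hook could look like, using simplified stand-in types (the real LoadedTrack shape and whatever teardown AudioBufferSink actually exposes may differ; `close` here is hypothetical):

```typescript
// Stand-in for the real LoadedTrack; only the fields the sketch needs.
type LoadedTrack = {
  id: string;
  // close() stands in for whatever teardown the sink API provides, if any
  sink: { close?: () => void };
};

// Typed event so listeners can read the unloaded track's id.
class TrackUnloadedEvent extends Event {
  readonly id: string;
  constructor(id: string) {
    super("trackunloaded");
    this.id = id;
  }
}

class AudioEngineSketch extends EventTarget {
  private loadedTracks = new Map<string, LoadedTrack>();

  register(track: LoadedTrack): void {
    this.loadedTracks.set(track.id, track);
  }

  hasTrack(id: string): boolean {
    return this.loadedTracks.has(id);
  }

  // Drops the in-memory track only; OPFS storage is deliberately untouched.
  unloadTrack(audioId: string): void {
    const track = this.loadedTracks.get(audioId);
    if (!track) return; // no-op when the track is not loaded
    track.sink.close?.(); // tear down the sink if the API provides one
    this.loadedTracks.delete(audioId);
    this.dispatchEvent(new TrackUnloadedEvent(audioId));
  }
}
```

With this in place, callers can balance every `loadAudio` with an `unloadTrack` and observe `"trackunloaded"` for cache bookkeeping, while `deleteFromOPFS` remains the only path that removes persisted data.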
dispose(): void {
  this.transport.stop();
  this.audioEngine.dispose();
  if (this.audioContext.state !== "closed") {
    this.audioContext.close();
  }
}
Don't close caller-provided AudioContext
Line 55 closes whichever AudioContext is attached, even when the caller passed their own instance through DAWConfig.audioContext. If the app shares that context across multiple audio subsystems, disposing this facade will tear the context down and break everything else still using it. Track ownership so we only close contexts we created ourselves.
Apply this diff so consumer-owned contexts stay running:
export class DAW {
private audioEngine: AudioEngine;
private transport: Transport;
private audioContext: AudioContext;
private opfsManager?: OPFSManager;
+ private ownsAudioContext: boolean;
constructor(config: DAWConfig = {}) {
- this.audioContext = config.audioContext || new AudioContext();
+ const providedContext = config.audioContext;
+ if (providedContext) {
+ this.audioContext = providedContext;
+ this.ownsAudioContext = false;
+ } else {
+ this.audioContext = new AudioContext();
+ this.ownsAudioContext = true;
+ }
@@
dispose(): void {
this.transport.stop();
this.audioEngine.dispose();
- if (this.audioContext.state !== "closed") {
- this.audioContext.close();
+ if (this.ownsAudioContext && this.audioContext.state !== "closed") {
+ void this.audioContext.close();
}
}
  }
  }

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/daw-sdk/src/core/daw.ts around lines 55 to 61, the dispose method
unconditionally closes the AudioContext passed to the DAW, which can tear down a
caller-owned context; modify the class to track ownership (e.g., set a boolean
flag when you create a new AudioContext in the constructor vs when one is
supplied via DAWConfig) and then change dispose to only call
audioContext.close() when that ownership flag is true and the context state is
not "closed"; ensure transport.stop() and audioEngine.dispose() still run
regardless, and clear or null internal references after successful disposal.
// Schedule all clips
for (const clip of clips) {
  this.scheduleClip(clip, fromTime);
}
Handle scheduleClip rejections
scheduleClip() is async and will throw (e.g., AudioEngine.getBufferIterator errors when a track isn’t loaded). Invoking it fire-and-forget on lines 55-58 means those rejections become unhandled promise rejections in the browser, leaving playback silently broken. Please either await the returned promise (e.g., await Promise.all(clips.map(...))) or attach a .catch() to log/handle errors before proceeding.
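A minimal sketch of the suggested fix, assuming `scheduleClip` rejects when a clip's audio is not loaded; the `Clip` type, the injected `scheduleClip`, and the `onError` callback are stand-ins for the real Transport internals:

```typescript
type Clip = { id: string; trackId: string };

async function scheduleAll(
  clips: Clip[],
  fromTime: number,
  scheduleClip: (clip: Clip, fromTime: number) => Promise<void>,
  onError: (clip: Clip, reason: unknown) => void,
): Promise<void> {
  // Promise.allSettled surfaces every rejection instead of letting
  // fire-and-forget calls become unhandled promise rejections
  const results = await Promise.allSettled(
    clips.map((clip) => scheduleClip(clip, fromTime)),
  );
  results.forEach((result, i) => {
    if (result.status === "rejected") {
      onError(clips[i], result.reason);
    }
  });
}
```

`Promise.allSettled` (rather than `Promise.all`) keeps one failing clip from aborting the scheduling of the others, which matches the "log and continue" option in the review.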
🤖 Prompt for AI Agents
In packages/daw-sdk/src/core/transport.ts around lines 55-58, the code calls the
async scheduleClip(clip, fromTime) in a fire-and-forget loop which can produce
unhandled promise rejections; change this to await the promises (e.g., await
Promise.all(clips.map(clip => this.scheduleClip(clip, fromTime)))) or at minimum
attach a .catch() to each returned promise to log or handle errors so rejections
are not unhandled and playback failures are surfaced.
pause(): void {
  if (this.state !== "playing") return;
  this.state = "paused";
  this.stop(); // For now, pause is same as stop

  this.dispatchEvent(
    new CustomEvent<TransportEvent>("transport", {
      detail: {
        type: "pause",
        state: "paused",
        currentTime: this.getCurrentTime(),
        timestamp: this.getCurrentTime(),
      },
    }),
  );
}
Pause drops transport state and time
Line 284 resets the state to 'paused', but calling stop() immediately overwrites it to 'stopped' and clears the scheduling context. As a result, this.getCurrentTime() (lines 294-295) reports the start offset instead of the actual playback position, and listeners see a 'stop' event before 'pause'. This breaks pause/resume semantics. Please capture the current playback time before tearing down, refactor the cleanup so pause() doesn’t emit a stop event, and leave this.state as 'paused' so resume logic has the correct timeline.
🤖 Prompt for AI Agents
In packages/daw-sdk/src/core/transport.ts around lines 284–299, pause currently
sets this.state = "paused" then calls stop() which resets state to "stopped",
clears scheduling context and causes getCurrentTime() to return the start offset
and emit a stop before pause; capture the current playback time before
performing any teardown, avoid calling stop() from pause (or call an internal
cleanup helper that does not change this.state or emit a stop event), set
this.state to "paused" and persist the captured time into the transport's paused
offset/timeline so resume can rebuild the correct timeline, and dispatch the
pause event using the captured currentTime/timestamp; alternatively add a
stop({silent:true})/teardownTransport() variant used by pause that only clears
schedulers without changing state or emitting events so listeners see a single
pause with correct time.
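A sketch of the pause/stop split the review describes, with the clock injected and the scheduler teardown stubbed out as stand-ins for the real Transport:

```typescript
type TransportState = "stopped" | "playing" | "paused";

class TransportSketch {
  private state: TransportState = "stopped";
  private startOffset = 0; // ms position playback (re)starts from
  private startedAt = 0;   // clock reading when play() was called
  private now: () => number;

  constructor(now: () => number) {
    this.now = now;
  }

  play(fromMs = this.startOffset): void {
    this.startOffset = fromMs;
    this.startedAt = this.now();
    this.state = "playing";
  }

  getCurrentTime(): number {
    if (this.state !== "playing") return this.startOffset;
    return this.startOffset + (this.now() - this.startedAt);
  }

  // Internal teardown: clears scheduled sources without touching state
  // and without emitting a stop event.
  private teardown(): void {
    // ...cancel scheduled clip sources here...
  }

  pause(): void {
    if (this.state !== "playing") return;
    const pausedAt = this.getCurrentTime(); // capture before teardown
    this.teardown();
    this.startOffset = pausedAt; // resume picks up from here
    this.state = "paused";
  }

  stop(): void {
    this.teardown();
    this.startOffset = 0;
    this.state = "stopped";
  }

  getState(): TransportState {
    return this.state;
  }
}
```

The key design point is that `pause()` and `stop()` share a private `teardown()` helper, so pausing never routes through `stop()`, never emits a stop event, and preserves both the `"paused"` state and the captured playback position for resume.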
  trimStart: z.number().min(0),
  trimEnd: z.number().min(0),
  sourceDurationMs: z.number().min(0),
  fadeIn: z.number().min(0).max(120_000).optional(),
  fadeOut: z.number().min(0).max(120_000).optional(),
  fadeInCurve: z.number().min(-99).max(99).default(0),
  fadeOutCurve: z.number().min(-99).max(99).default(0),
  loop: z.boolean().optional(),
  loopEnd: z.number().min(0).optional(),
  color: z.string().optional(),
});
Relax clip trim requirements to avoid breaking existing projects
trimStart and trimEnd are now mandatory, but a lot of our saved clips never persisted those fields—they rely on defaults (see apps/web/components/daw/export/export-preview-lanes.tsx falling back to clip.trimEnd || clip.sourceDurationMs). With the new schema any legacy clip lacking these properties will fail validation, so projects refuse to load. Please keep these fields optional (or supply defaults) so older data passes the Zod check. A minimal fix would be:
- trimStart: z.number().min(0),
- trimEnd: z.number().min(0),
+ trimStart: z.number().min(0).optional(),
+ trimEnd: z.number().min(0).optional(),
🤖 Prompt for AI Agents
packages/daw-sdk/src/types/schemas.ts around lines 55-65: trimStart and trimEnd
are currently required which breaks legacy clips; change them to be optional
(keep the min(0) constraint if present) or provide sensible defaults (e.g.,
trimStart default 0 and trimEnd optional or default 0) so older saved clips
without those fields pass Zod validation; ensure validators still enforce
non-negative values when the fields are present and adjust any dependent types
if necessary.
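If the schema keeps `trimStart`/`trimEnd` optional, a small normalizer can apply the defaults legacy clips relied on after validation. The field names mirror the schema, but `normalizeClip` itself is a hypothetical helper, and the `trimEnd` fallback mirrors the one `export-preview-lanes.tsx` already uses:

```typescript
type RawClip = {
  sourceDurationMs: number;
  trimStart?: number;
  trimEnd?: number;
};

type NormalizedClip = RawClip & { trimStart: number; trimEnd: number };

function normalizeClip(clip: RawClip): NormalizedClip {
  return {
    ...clip,
    trimStart: clip.trimStart ?? 0,
    // Same fallback the export preview applies: trim to the full source
    trimEnd: clip.trimEnd ?? clip.sourceDurationMs,
  };
}
```

This keeps the Zod check permissive for old data while giving the rest of the codebase fully populated trim fields.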
  ref={canvasRef}
  className="absolute inset-0 pointer-events-none"
  style={{ width, height }}
/>
Bug: Canvas lacks width/height attributes, blurs rendering
Missing width and height attributes on canvas element. The canvas only has style={{ width, height }} which sets CSS dimensions, but the actual canvas bitmap dimensions (width/height attributes) are not set. This causes the canvas to have a default 300x150 pixel bitmap that gets stretched to fit the CSS dimensions, resulting in blurry/distorted rendering. This is the same bug that was fixed in TimelineGridCanvas at line 79-81 where both attributes and style are properly set.
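A sketch of the sizing fix: the canvas bitmap must be sized in device pixels while CSS keeps the layout size. `computeBitmapSize` is a hypothetical helper, not code from the PR:

```typescript
// Convert a CSS pixel size into a bitmap size in whole device pixels,
// so 1px strokes stay crisp on high-DPI displays.
function computeBitmapSize(cssPx: number, devicePixelRatio: number): number {
  return Math.max(1, Math.round(cssPx * devicePixelRatio));
}

// In the component, both the attribute (bitmap) and the style (layout) are set:
//
// <canvas
//   ref={canvasRef}
//   width={computeBitmapSize(width, window.devicePixelRatio)}
//   height={computeBitmapSize(height, window.devicePixelRatio)}
//   style={{ width, height }}
// />
//
// followed by ctx.scale(devicePixelRatio, devicePixelRatio) before drawing,
// so draw calls can keep using CSS-pixel coordinates.
```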
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/web/components/daw/panels/daw-track-content.tsx (10 hunks)
🧰 Additional context used
📓 Path-based instructions (10)
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}: Use absolute imports with the @/ alias
Order imports: React → Next.js → third-party → local → types
**/*.{ts,tsx,js,jsx}: Replace deprecated v.bigint() with v.int64() for signed 64-bit integers
Use v.record() for records; v.map() and v.set() are not supported
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: Separate type imports using import type { ... } from '...'
Define component props with the type keyword (e.g., type ComponentProps = {})
Apply variants via class-variance-authority (cva)
Props type names are descriptive and end with Props (e.g., MessageProps)
Variant objects/functions use descriptive names (e.g.,messageContentVariants)
Validate component props using TypeScript (types/interfaces)
Trim values to handle trailing spaces from text expansion
Reflect UI state in the URL (filters, tabs, pagination, expanded panels); prefer libraries like nuqs
Back/Forward navigation must restore scroll position
Batch layout reads/writes and avoid unnecessary reflows/repaints
**/*.{ts,tsx}: Use 'as const' for string literals in discriminated unions
Declare arrays with explicit generic: const array: Array = [...]
Declare records with explicit generic: const record: Record<KeyType, ValueType> = {...}
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.tsx
📄 CodeRabbit inference engine (AGENTS.md)
**/*.tsx: Define React components as arrow functions: const Component = () => {}
Export React components as named exports (no default exports)
Use the cn() utility for className merging
Component names use PascalCase (e.g., Message, CodeBlock)
Handle async operations with proper React error boundaries
Manage focus (trap, move, return) per APG patterns
Loading buttons show a spinner and keep the original label
Enter submits focused text inputs; in textarea, Cmd/Ctrl+Enter submits while Enter inserts newline
Keep submit enabled until request starts; then disable, show spinner, and use an idempotency key
Do not block typing; accept free text and validate after input
Allow submitting incomplete forms to surface validation
Show errors inline next to fields; on submit, focus the first error
Use appropriate autocomplete, semantic name, correct type, and inputmode on inputs
Disable spellcheck for emails, codes, and usernames where appropriate
Placeholders end with an ellipsis and show example patterns (e.g., phone, API key)
Warn users about unsaved changes before navigation
Ensure compatibility with password managers and 2FA; allow pasting one-time codes
No dead zones on checkboxes/radios; ensure label and control share one generous hit target
Use `<a>`/`<Link>` for navigation to support Cmd/Ctrl/middle-click
Prefer optimistic UI; reconcile on response; on failure show error and rollback or provide Undo
Confirm destructive actions or provide an Undo window
Use polite `aria-live` regions for toasts and inline validation
Use an ellipsis character (…) for options that open follow-ups (e.g., “Rename…”)
Set accurate accessible names (`aria-label`), mark decorative elements `aria-hidden`, and verify in the Accessibility Tree
Icon-only buttons require a descriptive `aria-label`
Prefer native semantics (`button`, `a`, `label`, `table`) before ARIA roles
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{ts,tsx,js,jsx,css,scss,json,md}
📄 CodeRabbit inference engine (AGENTS.md)
Formatting: 2-space indentation, no semicolons, single quotes, trailing commas in objects/arrays
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{css,scss,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{css,scss,tsx}: During drag, disable text selection and set `inert` on dragged element/containers
Animations must be interruptible and input-driven; avoid autoplay
Avoid unwanted scrollbars; fix overflows
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{tsx,html}
📄 CodeRabbit inference engine (AGENTS.md)
<title> must match the current context
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{tsx,ts,css,scss}
📄 CodeRabbit inference engine (AGENTS.md)
Use the ellipsis character … (not three periods)
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{css,scss,tsx,md}
📄 CodeRabbit inference engine (AGENTS.md)
Use `scroll-margin-top` on headings for anchored links; include a Skip to content link; maintain hierarchical h1–h6
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{tsx,md}
📄 CodeRabbit inference engine (AGENTS.md)
Use non-breaking spaces to glue terms (e.g., 10 MB, ⌘ + K, Vercel SDK)
Files:
apps/web/components/daw/panels/daw-track-content.tsx
**/*.{tsx,ts}
📄 CodeRabbit inference engine (AGENTS.md)
Virtualize large lists (e.g., using virtua)
Files:
apps/web/components/daw/panels/daw-track-content.tsx
🧬 Code graph analysis (1)
apps/web/components/daw/panels/daw-track-content.tsx (1)
apps/web/lib/daw-sdk/hooks/use-timebase.ts (1)
`useTimebase` (6-226)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Vercel Agent Review
- GitHub Check: build
- GitHub Check: build
🔇 Additional comments (2)
apps/web/components/daw/panels/daw-track-content.tsx (2)
291-293: LGTM: Clean migration to SDK snap and time utilities.

The migration from grid-size-based snapping to the `snap()` function from `useTimebase` is cleaner and more maintainable. The conditional application (only when `timeline.snapToGrid` is true) is correct, and the `time.formatDuration` usage aligns with the new SDK structure.

Also applies to: 332-334, 890-895
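For reference, the snapping behavior this migrates to can be sketched as a pure function (a minimal sketch; the actual `snap()` from `useTimebase` may differ in signature and options):

```typescript
// Minimal sketch of interval-based time snapping; the real implementation
// lives in useTimebase and is only applied when timeline.snapToGrid is true.
function snap(timeMs: number, snapIntervalMs: number, snapToGrid: boolean): number {
  if (!snapToGrid || snapIntervalMs <= 0) return timeMs
  // Round to the nearest multiple of the snap interval
  return Math.round(timeMs / snapIntervalMs) * snapIntervalMs
}

// e.g., snap(1230, 250, true) → 1250; snap(1230, 250, false) → 1230
```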
541-578: The review comment incorrectly identifies a missing automation transfer for same-track moves. The `updateClipAtom` automatically handles clip-bound automation movement: it detects when a clip's `startTime` changes and shifts all automation points marked with that `clipId` by recalculating their timestamps relative to the new start position. When the same-track move code calls `updateClip` with the new `startTime`, this automation handling is triggered automatically. The `computeAutomationTransfer` function is specifically for cross-track moves, handling two types of automation: clip-bound automation (always moves with the clip) and range-based automation (if the `moveAutomation` flag is set). For same-track moves, clip-bound automation follows the clip through the existing `updateClipAtom` logic; no additional transfer is needed.

Likely an incorrect or invalid review comment.
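The clip-bound shift described above can be sketched like this (hypothetical types and helper name; the real logic lives inside `updateClipAtom`):

```typescript
type AutomationPoint = {
  time: number // absolute timeline position in ms
  value: number
  clipId?: string // present when the point is bound to a clip
}

// When a clip's startTime changes, shift every point bound to that clip
// by the same delta so automation follows the clip.
function shiftClipAutomation(
  points: Array<AutomationPoint>,
  clipId: string,
  oldStart: number,
  newStart: number,
): Array<AutomationPoint> {
  const delta = newStart - oldStart
  return points.map((p) =>
    p.clipId === clipId ? { ...p, time: p.time + delta } : p,
  )
}
```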
- Added width and height attributes to the canvas element in TrackGridLines for proper rendering dimensions.
- Refactored the getSnapIntervalMs function in useTimebase to utilize useCallback, improving performance and reducing unnecessary re-renders.
- Streamlined snapping logic for time and bars modes, enhancing the accuracy of time snapping based on granularity settings.
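The granularity-based interval lookup mentioned in the commit can be sketched as follows (interval values are illustrative assumptions, not the repo's actual settings; in `useTimebase` this lookup would be wrapped in `useCallback`):

```typescript
type SnapGranularity = 'coarse' | 'medium' | 'fine'

// Illustrative mapping from granularity to snap interval in ms;
// the real values in useTimebase may differ.
const SNAP_INTERVALS_MS: Record<SnapGranularity, number> = {
  coarse: 1000,
  medium: 250,
  fine: 50,
}

function getSnapIntervalMs(granularity: SnapGranularity): number {
  return SNAP_INTERVALS_MS[granularity]
}
```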
? point.clipRelativeTime
: point.clipId
? 0 // Clip-bound but missing clipRelativeTime: assume start of clip
: point.time; // Not clip-bound: use absolute time as-is
Bug: Incorrect fallback harms clip-relative timing logic
In resolveClipRelativePoint, when a point is clip-bound but clipRelativeTime is missing, the code assumes time 0 (start of clip). However, the fallback logic is incorrect: if point.clipId exists but clipRelativeTime is undefined, it sets relativeTime = 0. This means point.time (the absolute time) is ignored. The correct behavior should preserve point.time when clipRelativeTime is missing for non-clip-bound points, but for clip-bound points missing clipRelativeTime, it should either use point.time - clipStartTime to derive the relative time, or fail validation. The current implementation silently assumes 0, which could cause automation points to be incorrectly positioned at the start of the clip when they should be at their original absolute time positions.
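A sketch of the suggested fallback (hypothetical helper name; it derives the relative time from the absolute time instead of assuming 0):

```typescript
type Point = {
  time: number // absolute timeline time in ms
  clipId?: string
  clipRelativeTime?: number
}

// Suggested resolution order: explicit relative time, then a derived value
// for clip-bound points, then the absolute time for unbound points.
function resolveRelativeTime(point: Point, clipStartTime: number): number {
  if (point.clipRelativeTime !== undefined) return point.clipRelativeTime
  if (point.clipId !== undefined) return point.time - clipStartTime // derive, don't assume 0
  return point.time // not clip-bound: absolute time as-is
}
```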
ctx.lineTo(x, height);
}
if (ms < viewportStart || ms > viewportEnd) continue;
const x = Math.round(ms * pxPerMs);
Grid lines are drawn at absolute pixel coordinates instead of viewport-relative coordinates, causing alignment issues with scroll position.
📝 Patch Details
diff --git a/apps/web/components/daw/panels/timeline-grid-canvas.tsx b/apps/web/components/daw/panels/timeline-grid-canvas.tsx
index b52e689..8522f36 100644
--- a/apps/web/components/daw/panels/timeline-grid-canvas.tsx
+++ b/apps/web/components/daw/panels/timeline-grid-canvas.tsx
@@ -1,6 +1,7 @@
"use client";
import { useAtom } from "jotai";
import { memo, useLayoutEffect, useMemo, useRef } from "react";
+import { time } from "@wav0/daw-sdk";
import {
cachedTimeGridAtom,
horizontalScrollAtom,
@@ -55,7 +56,7 @@ export const TimelineGridCanvas = memo(function TimelineGridCanvas({
ctx.beginPath();
for (const ms of timeGrid.minors) {
if (ms < viewportStart || ms > viewportEnd) continue;
- const x = Math.round(ms * pxPerMs);
+ const x = Math.round(time.timeToPixel(ms, pxPerMs, scrollLeft));
ctx.moveTo(x, 0);
ctx.lineTo(x, height);
}
@@ -66,7 +67,7 @@ export const TimelineGridCanvas = memo(function TimelineGridCanvas({
ctx.beginPath();
for (const marker of timeGrid.majors) {
if (marker.ms < viewportStart || marker.ms > viewportEnd) continue;
- const x = Math.round(marker.ms * pxPerMs);
+ const x = Math.round(time.timeToPixel(marker.ms, pxPerMs, scrollLeft));
ctx.moveTo(x, 0);
ctx.lineTo(x, height);
}
Analysis
TimelineGridCanvas draws grid lines at absolute coordinates instead of viewport-relative coordinates
What fails: TimelineGridCanvas.useLayoutEffect() uses Math.round(ms * pxPerMs) for grid line positioning, ignoring scroll offset and causing misalignment when timeline is scrolled horizontally
How to reproduce:
- Load timeline with grid markers at various time positions
- Scroll timeline horizontally to any non-zero scroll position
- Observe grid lines appear at wrong pixel coordinates relative to viewport
Result: Grid lines drawn at absolute pixel positions (e.g., 1000ms mark always at 1000px regardless of scroll), causing visual misalignment with time markers and track content
Expected: Grid lines should be positioned using time.timeToPixel(ms, pxPerMs, scrollLeft) to account for scroll offset, matching the pattern used by UnifiedPlayhead component
Evidence: UnifiedPlayhead correctly uses time.timeToPixel() while TimelineGridCanvas, TrackGridLines, and TimelineGridHeader all use the incorrect absolute positioning pattern
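For reference, the viewport-relative conversion assumed by the patch can be sketched as a pair of inverse helpers (signatures inferred from usage in the diff; the SDK's actual `time` namespace may differ):

```typescript
// Convert an absolute timeline time (ms) to a viewport-relative x coordinate,
// accounting for horizontal scroll.
function timeToPixel(ms: number, pxPerMs: number, scrollLeft: number): number {
  return ms * pxPerMs - scrollLeft
}

// Inverse: convert a viewport-relative x coordinate back to timeline time.
function pixelToTime(px: number, pxPerMs: number, scrollLeft: number): number {
  return (px + scrollLeft) / pxPerMs
}

// At scrollLeft = 0 the two positions coincide; at non-zero scroll,
// the grid line shifts left by the scroll offset, fixing the misalignment.
```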
- Updated logic to directly use clipRelativeTime if defined. - Improved handling of clip-bound points by deriving relative time from absolute time, enhancing the accuracy of time calculations.
- Refactored raw time calculation to use pixelToTime method for better precision. - Added horizontalScroll dependency to the useCallback hook for accurate state updates during dragging.
DAW SDK: Complete architecture migration and refactor
Complete migration from internal DAW utilities to modular @wav0/daw-sdk and @wav0/daw-react packages.
Note
Adds generation-safe transport and audio scheduling constants, canvas-based timeline/track grids with a unified playhead, and new snap granularity controls; updates SDK/atoms/hooks, removes legacy panels, and refreshes docs/config/deps.
- `audio-scheduling-constants` and expose via `@wav0/daw-sdk/app`.
- `Transport` with per-clip state, generation tokens, clip gain chain, fade scheduling, and safe cancelation.
- `PlaybackService`: safer automation scheduling, mix refresh during playback, auto-init of new tracks, and precise time updates.
- (`timeToPixel`/`pixelToTime`) and switch the time grid to pixel-viewport + optional snap-aligned generation.
- `snapIntervalMs`.
- `UnifiedPlayhead`; simplify `UnifiedOverlay` to project-end only.
- (`TimelineGridCanvas`, `TimelineGridHeader`, `TrackGridCanvas`/`Lines`) for smoother rendering.
- `DAWContainer`; integrate snap controls in toolbar; refine curve preview.
- `addTrackAtom` with live playback sync; load/import reschedule fixes; playback time precision/throttling.
- `useTransportEvents` uses stable refs; `useBridges` returns null-safe bridges.
- Remove `audio-test-panel`, `automation-transfer-dialog`, `keyboard-flow-examples`, and the component README; trim legacy marker/beat code.
- `.mcp.json`; major README/CONTRIBUTING/AGENTS updates and new `CLAUDE.md`.
- (`next` 16.0.1, `turbo` 2.6, `botid`, `dotenv-cli`); add `test` script; tweak `tsconfig` excludes; minor LICENSE update.

Written by Cursor Bugbot for commit c1bc450. This will update automatically on new commits.