feat: harden runtime/pipeline + professional docs overhaul (#14)
Conversation
Caution: Review failed — the pull request is closed.

📝 Walkthrough

This PR updates CI workflows (concurrency, timeouts, lint, full Docker tests, tag validation, frozen lockfiles), rewrites large portions of documentation, adds runtime version validation and helpers, extracts Docker run arg construction, hardens init to avoid overwrites with tests, and enhances build/release scripts to verify artifacts and emit SHA256SUMS.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Release as Release Script
    participant Git as Git Repository
    participant Tests as Test Runner
    participant Pkg as Package Files
    participant Remote as Remote Repo
    Release->>Git: ensureCleanWorkingTree()
    Git-->>Release: ok | error
    Release->>Git: getCurrentBranch()
    Git-->>Release: branchName
    Release->>Tests: runReleaseChecks() (lint/typecheck/tests)
    Tests-->>Release: pass | fail
    Release->>Pkg: update package.json version
    Pkg-->>Release: updated
    Release->>Git: commit & tag
    Git-->>Release: committed & tagged
    Release->>Remote: push branch & tags
    Remote-->>Release: pushed
    Release->>Release: build-binaries -> write dist/*.tar.gz
    Release->>Release: compute SHA256SUMS -> write dist/SHA256SUMS
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/build-binaries.ts (1)
Line 96: ⚠️ Potential issue | 🔴 Critical — Build failures silently exit with code 0; CI will not detect them.

`build().catch(console.error)` logs the error but the process exits successfully. When `throw new Error(...)` fires on line 88, CI will report a green build despite compile/archive failures.

Proposed fix:

```diff
-build().catch(console.error);
+build().catch((err) => {
+  console.error(err);
+  process.exit(1);
+});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build-binaries.ts` at line 96, The current top-level call build().catch(console.error) swallows failures and leaves the process exit code as 0; change the call-site that invokes build() so errors are logged and the process exits non‑zero (e.g., capture the error from build(), call process.exit(1) after logging). Locate the build() invocation in build-binaries.ts and replace the simple console.error handler with an explicit error handler that logs the error (including stack) and terminates the process with a non-zero exit code so CI will fail on build errors.
🧹 Nitpick comments (10)
packages/cli/src/commands/init.ts (1)
Lines 123-133: The throw-inside-try pattern relies on `Error` lacking a `.code` property.

The guard works because the hand-thrown `Error` on line 126 has no `code` property (`undefined !== 'ENOENT'`), so it falls through to the re-throw on line 130. This is correct but non-obvious — a future change that sets `.code` on that error (or switches to a `SystemError` subclass) would silently swallow the overwrite refusal. Consider restructuring for clarity:

♻️ Suggested refactor

```diff
 for (const filePath of generatedFiles) {
-  try {
-    await access(filePath);
-    throw new Error(`Refusing to overwrite existing file: ${filePath}`);
-  } catch (err) {
-    const fileErr = err as NodeJS.ErrnoException;
-    if (fileErr.code !== 'ENOENT') {
-      throw err;
-    }
+  const exists = await access(filePath).then(
+    () => true,
+    (err) => {
+      if ((err as NodeJS.ErrnoException).code === 'ENOENT') return false;
+      throw err;
+    },
+  );
+  if (exists) {
+    throw new Error(`Refusing to overwrite existing file: ${filePath}`);
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/init.ts` around lines 123 - 133, The current loop in init.ts throws an Error inside the try block which is caught and inspected by checking err.code, making the overwrite-guard fragile; instead, call await access(filePath) inside a try/catch only to detect "does not exist": wrap only the access call in try { await access(filePath); /* access succeeded */ } catch (err) { const fileErr = err as NodeJS.ErrnoException; if (fileErr.code === 'ENOENT') continue; throw err; } and after the successful access (i.e., no error thrown) explicitly throw the refusal Error (`Refusing to overwrite existing file: ${filePath}`); reference the generatedFiles loop and the access call/NodeJS.ErrnoException/'ENOENT' check when making the change.

packages/cli/src/__tests__/init.test.ts (1)
Lines 23-32: Test 1 is vulnerable to silent process.exit — any error in `initCommand` kills the test runner without a clear failure.

If `initCommand` encounters an error anywhere in its try block (lines 114–145 in init.ts) — whether from `getRuntimeConfig`, `mkdir`, `writeFile`, the file-existence check, or any other operation — `process.exit(1)` fires at line 148 without a spy to intercept it. This silently terminates the test runner instead of producing a test failure. Test 2 already solves this with a spy on line 39 — apply it globally in `beforeEach` for consistency and robustness across all tests.

♻️ Suggested refactor — move the spy into beforeEach

```diff
 beforeEach(async () => {
   originalCwd = process.cwd();
   sandboxDir = await mkdtemp(join(tmpdir(), 'ignite-init-'));
   process.chdir(sandboxDir);
+  jest.spyOn(process, 'exit').mockImplementation((code?: string | number | null) => {
+    throw new Error(`process.exit:${code ?? 0}`);
+  });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/__tests__/init.test.ts` around lines 23 - 32, The test can be killed silently by process.exit calls from initCommand; add a global spy on process.exit in the test suite setup so any process.exit(1) invoked by initCommand fails the test instead of terminating the runner. In the test file's beforeEach, stub/spy process.exit (the same approach used in the second test) and restore it in afterEach, ensuring initCommand remains unchanged; reference initCommand and the process.exit call that originates in init.ts so the spy covers all tests.

packages/core/src/runtime/runtime-registry.ts (1)
Lines 90-93: `getSupportedVersions` returns `[]` for both unknown runtimes and unconstrained ones.

A caller can't distinguish "runtime doesn't exist" from "runtime accepts any version." If this becomes a public-facing API, consider returning `null` or throwing for unknown runtimes to make the distinction clear.

🤖 Prompt for AI Agents
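A sketch of the null-returning variant, with a toy registry standing in for the real plugin lookup — the shapes and names here are illustrative assumptions, not the actual `runtime-registry.ts` API:

```typescript
// Hypothetical plugin shape: supportedVersions is optional ("unconstrained").
type RuntimePlugin = { supportedVersions?: string[] };

const registry = new Map<string, RuntimePlugin>([
  ["node", { supportedVersions: ["18", "20", "22"] }],
  ["bun", {}], // registered but unconstrained: accepts any version
]);

function getRuntimePlugin(name: string): RuntimePlugin | undefined {
  return registry.get(name);
}

// null = runtime doesn't exist; [] = runtime exists with no constraint.
function getSupportedVersions(runtimeName: string): string[] | null {
  const plugin = getRuntimePlugin(runtimeName);
  if (!plugin) return null; // unknown runtime, now distinguishable
  return plugin.supportedVersions ?? [];
}

console.log(getSupportedVersions("node")); // ["18", "20", "22"]
console.log(getSupportedVersions("bun")); // []
console.log(getSupportedVersions("python")); // null
```

Callers that previously treated `[]` as "anything goes" would need a null check, which is exactly the point: the ambiguous case becomes a compile-time decision.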
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/runtime/runtime-registry.ts` around lines 90 - 93, The current getSupportedVersions function masks "unknown runtime" by returning [] for both non-existent runtimes and runtimes that accept any version; update getSupportedVersions to detect when getRuntimePlugin(runtimeName) returns undefined and return null (or throw) to distinguish that case from an unconstrained runtime, while preserving the existing behavior of returning plugin.supportedVersions (or [] only when the plugin explicitly defines it as such); reference getSupportedVersions, getRuntimePlugin, and the supportedVersions property when implementing this check.

packages/core/src/__tests__/docker-runtime.test.ts (1)
Lines 4-20: Test covers the critical `--rm` omission, but coverage is thin for the rest of `buildDockerRunArgs`.

Consider adding cases for security options, volumes, env vars, and the `command` passthrough. These branches have non-trivial logic (e.g., tmpfs formatting, volume mode) that would benefit from explicit assertions, especially since this is now a standalone public helper.

🤖 Prompt for AI Agents
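To make the suggested coverage concrete, here is a self-contained sketch. The option shape and the argument formatting below are assumptions for demonstration only — they are not the real `buildDockerRunArgs` signature, so assertions would need to be adapted to the actual helper:

```typescript
// Hypothetical input shape, for illustration only.
interface RunOptions {
  image: string;
  name: string;
  env?: Record<string, string>;
  securityOpt?: string[];
  tmpfs?: { path: string; sizeMb: number }[];
  command?: string[];
}

// Stand-in implementation mirroring the branches the review wants tested.
function buildDockerRunArgs(opts: RunOptions): string[] {
  const args = ["run", "--name", opts.name];
  for (const opt of opts.securityOpt ?? []) args.push("--security-opt", opt);
  for (const t of opts.tmpfs ?? []) args.push("--tmpfs", `${t.path}:size=${t.sizeMb}m`);
  for (const [k, v] of Object.entries(opts.env ?? {})) args.push("-e", `${k}=${v}`);
  args.push(opts.image, ...(opts.command ?? []));
  return args;
}

// The kind of assertions the added test cases would make:
const args = buildDockerRunArgs({
  image: "node:20",
  name: "sandbox",
  env: { NODE_ENV: "test" },
  securityOpt: ["no-new-privileges"],
  tmpfs: [{ path: "/tmp", sizeMb: 64 }],
  command: ["node", "index.js"],
});
console.assert(args.includes("--security-opt")); // security branch
console.assert(args.includes("/tmp:size=64m")); // tmpfs formatting branch
console.assert(args.includes("NODE_ENV=test")); // env rendering branch
console.assert(args.indexOf("node:20") < args.indexOf("index.js")); // command after image
```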
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/__tests__/docker-runtime.test.ts` around lines 4 - 20, Add unit tests for buildDockerRunArgs that exercise its non-trivial branches: add cases asserting securityOpt flags appear when provided (e.g., securityOpt), tmpfs entries are formatted correctly, bind volumes include the correct host:container[:mode] strings and tmpfs mounts use size and mode, environment variables are rendered as -e KEY=VALUE entries, and that a passed-through command array appends after the image name; reference the buildDockerRunArgs helper and assert presence/ordering of 'run', imageName, '--name', securityOpt, tmpfs formatted strings, volume mode suffixes, '-e' entries, and the command tokens to ensure each branch is covered.

agent.md (1)
Lines 1-59: Remove `agent.md` or replace with a redirect to `AGENTS.md`.

This file is identical to `AGENTS.md`, creating unnecessary duplication. The heading `# AGENTS.md` inside `agent.md` further adds confusion. Maintain a single source of truth by either removing this file or replacing its content with a reference to the canonical `AGENTS.md`.

Option: replace agent.md with a redirect

```diff
-# AGENTS.md
-
-This guide tells coding agents how to work safely and effectively in this repository.
-
-## Core Rule
-...
+# Agent Guidelines
+
+See [AGENTS.md](./AGENTS.md) for the full coding-agent guide.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@agent.md` around lines 1 - 59, The file agent.md duplicates AGENTS.md (it even contains the heading "# AGENTS.md"); remove agent.md or replace its contents with a one-line redirect pointing to AGENTS.md (e.g., "See AGENTS.md for the canonical guide") so there is a single source of truth; ensure any references to agent.md in documentation or scripts are updated to point to AGENTS.md and keep the canonical AGENTS.md unchanged.

.github/workflows/ci.yml (1)
Lines 64-71: Prefer `jq` or `bun` over `node` for reading the `package.json` version.

`node -p "require('./package.json').version"` is functional (Node.js is present on `ubuntu-latest`), but is inconsistent with the bun-first pipeline. Either of these is cleaner:

♻️ Alternative approaches

```diff
-PKG_VERSION="$(node -p "require('./package.json').version")"
+PKG_VERSION="$(jq -r '.version' package.json)"
```

or, keeping it fully bun:

```diff
-PKG_VERSION="$(node -p "require('./package.json').version")"
+PKG_VERSION="$(bun -e "console.log(JSON.parse(require('fs').readFileSync('package.json','utf8')).version)")"
```

`jq` is the simplest and is available on all GitHub-hosted Ubuntu runners.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 64 - 71, The workflow currently uses node -p "require('./package.json').version" to set PKG_VERSION which is inconsistent with the bun-first pipeline; replace that command with a jq (or bun) invocation so TAG_VERSION and PKG_VERSION are read consistently — e.g. set PKG_VERSION="$(jq -r '.version' package.json)" (or use bun if you prefer) in the Verify tag matches package version step and keep the existing TAG_VERSION comparison logic unchanged.

README.md (1)
Lines 117-122: `sha256sum` is Linux-only; add a macOS note for completeness.

`sha256sum -c SHA256SUMS` is the GNU coreutils command. On macOS the equivalent is `shasum -a 256 -c SHA256SUMS`. Given the tool's Linux/Docker focus this is minor, but it's a common stumbling block for macOS contributors verifying binaries.

♻️ Suggested update

```diff
 cd dist
-sha256sum -c SHA256SUMS
+# Linux
+sha256sum -c SHA256SUMS
+# macOS
+shasum -a 256 -c SHA256SUMS
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 117 - 122, Update the "Verify Release Artifacts" section so it documents both Linux and macOS commands: keep the existing Linux command (`sha256sum -c SHA256SUMS`) and add the macOS equivalent (`shasum -a 256 -c SHA256SUMS`) with a short label or comment indicating which platform each command is for; locate this in the "Verify Release Artifacts" heading where `sha256sum -c SHA256SUMS` currently appears and add the macOS variant and a one-line platform note nearby.

scripts/release.ts (1)
Lines 24-29: If `bun run lint` auto-fixes files, those fixes won't be included in the release commit.

`runReleaseChecks` runs after user confirmation but before `updatePackageVersion`. If the lint script is configured with auto-fix, the working tree becomes dirty. The subsequent `git add` at line 167 only stages `package.json` and related files — lint-fixed files would be left uncommitted, meaning the release tag points to code that differs from the post-lint state.

Adding a clean-tree re-check after `runReleaseChecks` catches this case:

♻️ Proposed fix

```diff
 await runReleaseChecks();
+await ensureCleanWorkingTree(); // guard against lint auto-fix side effects
 console.log("\n📝 Updating package.json...");
```

Also applies to: line 161
docs/api.md (2)
Lines 196-205: Consider noting why `serviceName` is duplicated inside the nested `preflight` object.

Both the outer envelope and the nested `preflight` object carry `serviceName`. If `PreflightResult` in `@ignite/shared` is a self-contained type (used standalone by CLI too), the duplication is intentional — but documenting that here prevents consumer confusion about which field to use.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/api.md` around lines 196 - 205, Add a short clarifying note in the docs next to the JSON example explaining that the top-level "serviceName" is the envelope service identifier while the nested "preflight.serviceName" comes from the PreflightResult type (from `@ignite/shared`) and is intentionally duplicated because PreflightResult is a self-contained type used independently (e.g., by the CLI); reference the symbols "serviceName", "preflight", and "PreflightResult" so readers know which field to use in envelope vs standalone contexts.
Lines 59-61: Clarify `--skip-preflight` to prevent misread.

The current note is accurate but terse. Because the flag name implies preflight is skipped entirely, users commonly misread it. Consider rewording to make the semantics explicit:

✏️ Suggested rewording

```diff
-`--skip-preflight` bypasses fail-fast blocking, but preflight still runs and is included in reporting.
+`--skip-preflight` does not skip preflight execution; it bypasses fail-fast blocking on preflight failure. Preflight still runs and its result is included in reporting.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/api.md` around lines 59 - 61, The note about --skip-preflight is too terse and can be misread; reword the Behavior note so it explicitly states that the flag prevents fail-fast blocking but does not prevent preflight checks from executing or being included in reports (e.g., "Using --skip-preflight disables fail‑fast blocking so failures won't abort the run early, but preflight checks still execute and their results are recorded in the final report"). Update the line referencing `--skip-preflight` to this clearer phrasing and keep the emphasis that execution still occurs and results are included in reporting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 40-44: CI currently runs unit tests twice: the job step named "Run
unit tests" executes "bun run test:unit" and later the step "Run full test suite
(Docker)" executes "bun run test", which re-runs unit tests because the full
suite includes them; decide whether you want fail-fast or a single comprehensive
run and update the workflow accordingly—either remove the "Run unit tests" step
and keep "Run full test suite (Docker)" (bun run test) for a single run, or keep
both but change "Run full test suite (Docker)" to a docker-only command that
excludes unit tests (or modify the "test" script to not include unit tests) so
that "bun run test:unit" and "bun run test" are not redundant.
In `@docs/api.md`:
- Around line 294-310: The defaults in the docs violate the validation in
load-service.ts for the memory thresholds: swap the documented defaults so
failRatio is 1 and warnRatio is 0.8 (i.e., change the preflight.memory defaults
from warnRatio: 1 / failRatio: 0.8 to warnRatio: 0.8 / failRatio: 1) so they
pass the existing validation and match the threshold semantics used in
analyze-memory.ts; alternatively, if you prefer keeping the current doc values,
update the validation in load-service.ts (the warnRatio/failRatio check) to
allow the documented ordering instead.
In `@docs/threat-model.md`:
- Around line 87-89: The "Responsible Disclosure" section currently tells
reporters to report privately but provides no channel; update the Responsible
Disclosure paragraph to include an actionable contact mechanism such as a GitHub
security advisory link (e.g., "Use GitHub Security Advisories:
https://github.com/OWNER/REPO/security/advisories"), an explicit security
contact email (e.g., security@yourdomain.com), and optionally a PGP key or
statement about accepted encryption and expected response time so reporters have
a clear, private way to submit vulnerabilities; edit the "Responsible
Disclosure" heading content accordingly.
In `@README.md`:
- Around line 21-26: Replace the unhyphenated phrase "AI agent generated code"
with the compound-modifier form "AI agent-generated code" in the README text
block so the modifier preceding the noun is correctly hyphenated; locate the
exact string "AI agent generated code" in the list and update it to "AI
agent-generated code".
In `@scripts/release.ts`:
- Around line 9-14: The check in ensureCleanWorkingTree treats untracked files
as dirty because git status --porcelain returns lines prefixed with "??"; update
ensureCleanWorkingTree to ignore untracked entries by filtering out lines that
start with "??" when computing the status (so only staged/modified/deleted files
trigger the error), and if you prefer the other approach instead, keep the
current check but change the thrown Error text to recommend including untracked
files (e.g., "git stash --include-untracked" or using "git clean -fd") so the
guidance matches the behavior.
---
Outside diff comments:
In `@scripts/build-binaries.ts`:
- Line 96: The current top-level call build().catch(console.error) swallows
failures and leaves the process exit code as 0; change the call-site that
invokes build() so errors are logged and the process exits non‑zero (e.g.,
capture the error from build(), call process.exit(1) after logging). Locate the
build() invocation in build-binaries.ts and replace the simple console.error
handler with an explicit error handler that logs the error (including stack) and
terminates the process with a non-zero exit code so CI will fail on build
errors.
---
Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 64-71: The workflow currently uses node -p
"require('./package.json').version" to set PKG_VERSION which is inconsistent
with the bun-first pipeline; replace that command with a jq (or bun) invocation
so TAG_VERSION and PKG_VERSION are read consistently — e.g. set
PKG_VERSION="$(jq -r '.version' package.json)" (or use bun if you prefer) in the
Verify tag matches package version step and keep the existing TAG_VERSION
comparison logic unchanged.
In `@agent.md`:
- Around line 1-59: The file agent.md duplicates AGENTS.md (it even contains the
heading "# AGENTS.md"); remove agent.md or replace its contents with a one-line
redirect pointing to AGENTS.md (e.g., "See AGENTS.md for the canonical guide")
so there is a single source of truth; ensure any references to agent.md in
documentation or scripts are updated to point to AGENTS.md and keep the
canonical AGENTS.md unchanged.
In `@docs/api.md`:
- Around line 196-205: Add a short clarifying note in the docs next to the JSON
example explaining that the top-level "serviceName" is the envelope service
identifier while the nested "preflight.serviceName" comes from the
PreflightResult type (from `@ignite/shared`) and is intentionally duplicated
because PreflightResult is a self-contained type used independently (e.g., by
the CLI); reference the symbols "serviceName", "preflight", and
"PreflightResult" so readers know which field to use in envelope vs standalone
contexts.
- Around line 59-61: The note about --skip-preflight is too terse and can be
misread; reword the Behavior note so it explicitly states that the flag prevents
fail-fast blocking but does not prevent preflight checks from executing or being
included in reports (e.g., "Using --skip-preflight disables fail‑fast blocking
so failures won't abort the run early, but preflight checks still execute and
their results are recorded in the final report"). Update the line referencing
`--skip-preflight` to this clearer phrasing and keep the emphasis that execution
still occurs and results are included in reporting.
In `@packages/cli/src/__tests__/init.test.ts`:
- Around line 23-32: The test can be killed silently by process.exit calls from
initCommand; add a global spy on process.exit in the test suite setup so any
process.exit(1) invoked by initCommand fails the test instead of terminating the
runner. In the test file's beforeEach, stub/spy process.exit (the same approach
used in the second test) and restore it in afterEach, ensuring initCommand
remains unchanged; reference initCommand and the process.exit call that
originates in init.ts so the spy covers all tests.
In `@packages/cli/src/commands/init.ts`:
- Around line 123-133: The current loop in init.ts throws an Error inside the
try block which is caught and inspected by checking err.code, making the
overwrite-guard fragile; instead, call await access(filePath) inside a try/catch
only to detect "does not exist": wrap only the access call in try { await
access(filePath); /* access succeeded */ } catch (err) { const fileErr = err as
NodeJS.ErrnoException; if (fileErr.code === 'ENOENT') continue; throw err; } and
after the successful access (i.e., no error thrown) explicitly throw the refusal
Error (`Refusing to overwrite existing file: ${filePath}`); reference the
generatedFiles loop and the access call/NodeJS.ErrnoException/'ENOENT' check
when making the change.
In `@packages/core/src/__tests__/docker-runtime.test.ts`:
- Around line 4-20: Add unit tests for buildDockerRunArgs that exercise its
non-trivial branches: add cases asserting securityOpt flags appear when provided
(e.g., securityOpt), tmpfs entries are formatted correctly, bind volumes include
the correct host:container[:mode] strings and tmpfs mounts use size and mode,
environment variables are rendered as -e KEY=VALUE entries, and that a
passed-through command array appends after the image name; reference the
buildDockerRunArgs helper and assert presence/ordering of 'run', imageName,
'--name', securityOpt, tmpfs formatted strings, volume mode suffixes, '-e'
entries, and the command tokens to ensure each branch is covered.
In `@packages/core/src/runtime/runtime-registry.ts`:
- Around line 90-93: The current getSupportedVersions function masks "unknown
runtime" by returning [] for both non-existent runtimes and runtimes that accept
any version; update getSupportedVersions to detect when
getRuntimePlugin(runtimeName) returns undefined and return null (or throw) to
distinguish that case from an unconstrained runtime, while preserving the
existing behavior of returning plugin.supportedVersions (or [] only when the
plugin explicitly defines it as such); reference getSupportedVersions,
getRuntimePlugin, and the supportedVersions property when implementing this
check.
In `@README.md`:
- Around line 117-122: Update the "Verify Release Artifacts" section so it
documents both Linux and macOS commands: keep the existing Linux command
(`sha256sum -c SHA256SUMS`) and add the macOS equivalent (`shasum -a 256 -c
SHA256SUMS`) with a short label or comment indicating which platform each
command is for; locate this in the "Verify Release Artifacts" heading where
`sha256sum -c SHA256SUMS` currently appears and add the macOS variant and a
one-line platform note nearby.
```yaml
- name: Run unit tests
  run: bun run test:unit

- name: Run full test suite (Docker)
  run: bun run test
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Find and display test script definitions in package.json
find . -name "package.json" -type f | head -20 | while read file; do
  echo "=== $file ==="
  jq -r '.scripts | select(. != null) | to_entries[] | select(.key | test("^test")) | "\(.key): \(.value)"' "$file" 2>/dev/null || true
done
```

Repository: dev-dami/ignite
Length of output: 3794
Unit tests execute twice in the CI pipeline.
bun run test:unit (line 41) excludes docker-execution and server tests, while bun run test (line 44) runs the full test suite, which includes unit tests. If this is a fail-fast pattern (stop early on unit test failure), it makes sense; otherwise, unit tests are re-executed unnecessarily, increasing CI time. Consider whether the redundancy is intentional or should be consolidated into a single command that fails fast with appropriate test separation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/ci.yml around lines 40 - 44, CI currently runs unit tests
twice: the job step named "Run unit tests" executes "bun run test:unit" and
later the step "Run full test suite (Docker)" executes "bun run test", which
re-runs unit tests because the full suite includes them; decide whether you want
fail-fast or a single comprehensive run and update the workflow
accordingly—either remove the "Run unit tests" step and keep "Run full test
suite (Docker)" (bun run test) for a single run, or keep both but change "Run
full test suite (Docker)" to a docker-only command that excludes unit tests (or
modify the "test" script to not include unit tests) so that "bun run test:unit"
and "bun run test" are not redundant.
```yaml
process:
  allowSpawn: false     # Block spawning child processes
  allowedCommands: []   # Optional: allow specific commands
preflight:
  memory:
    baseMb: 50
    perDependencyMb: 2
    warnRatio: 1
    failRatio: 0.8
  dependencies:
    warnCount: 100
    infoCount: 50
  image:
    warnMb: 500
    failMb: 1000
  timeout:
    minMs: 100
    maxMs: 30000
    coldStartBufferMs: 500
```
🧩 Analysis chain
🏁 Script executed:

```bash
rg -n "warnRatio|failRatio" --type=ts -C 5
```

Repository: dev-dami/ignite
Length of output: 3885

🏁 Script executed:

```bash
rg -n "DEFAULT_WARN_RATIO|DEFAULT_FAIL_RATIO" --type=ts -C 2
```

Repository: dev-dami/ignite
Length of output: 1267

🏁 Script executed:

```bash
cat -n packages/core/src/preflight/analyze-memory.ts
```

Repository: dev-dami/ignite
Length of output: 3754
Fix validation constraint and correct defaults: warnRatio must be less restrictive than failRatio.

The validation rule in load-service.ts (line 134) requires `warnRatio < failRatio`, but the defaults are `warnRatio: 1` and `failRatio: 0.8` (which violate this constraint). The logic in analyze-memory.ts treats ratios as threshold multipliers where higher ratios = stricter checks, making `warnRatio < failRatio` backwards. The documented example cannot be used as-is because it fails validation. Either:

- fix the validation to allow `warnRatio >= failRatio`, or
- swap the defaults to `failRatio: 1` and `warnRatio: 0.8`
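The swapped ordering can be illustrated with a small sketch. The semantics here are inferred from the review's description of analyze-memory.ts, and `classifyMemory` is a hypothetical stand-in, not the project's actual function:

```typescript
// Assumed semantics: estimated usage is compared against limit * ratio,
// so the warn threshold (0.8) must trigger before the fail threshold (1).
type MemoryVerdict = "ok" | "warn" | "fail";

function classifyMemory(
  estimatedMb: number,
  limitMb: number,
  warnRatio = 0.8,
  failRatio = 1,
): MemoryVerdict {
  // Mirrors the load-service.ts validation rule described in the review.
  if (warnRatio >= failRatio) {
    throw new Error("warnRatio must be < failRatio");
  }
  if (estimatedMb >= limitMb * failRatio) return "fail";
  if (estimatedMb >= limitMb * warnRatio) return "warn";
  return "ok";
}

console.log(classifyMemory(50, 100)); // "ok"
console.log(classifyMemory(85, 100)); // "warn"  (above 80% of limit)
console.log(classifyMemory(100, 100)); // "fail" (at or above the limit)
```

With the documented defaults (`warnRatio: 1`, `failRatio: 0.8`) the constructor-time check above would throw, which is exactly why the example in docs/api.md cannot be used as-is.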
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/api.md` around lines 294 - 310, The defaults in the docs violate the
validation in load-service.ts for the memory thresholds: swap the documented
defaults so failRatio is 1 and warnRatio is 0.8 (i.e., change the
preflight.memory defaults from warnRatio: 1 / failRatio: 0.8 to warnRatio: 0.8 /
failRatio: 1) so they pass the existing validation and match the threshold
semantics used in analyze-memory.ts; alternatively, if you prefer keeping the
current doc values, update the validation in load-service.ts (the
warnRatio/failRatio check) to allow the documented ordering instead.
```markdown
## Responsible Disclosure

- [Docker Security Best Practices](https://docs.docker.com/engine/security/)
- [Linux Namespaces](https://man7.org/linux/man-pages/man7/namespaces.7.html)
- [Seccomp](https://man7.org/linux/man-pages/man2/seccomp.2.html)
- [gVisor](https://gvisor.dev/)
- [Kata Containers](https://katacontainers.io/)

Please report vulnerabilities privately to the maintainer instead of opening public issues.
```
Responsible disclosure section lacks a contact mechanism.
"Report vulnerabilities privately to the maintainer" is not actionable without a method. Reporters who can't find a private channel will default to a public issue.
Consider adding a GitHub security advisory link, which GitHub provides for free:
```diff
-Please report vulnerabilities privately to the maintainer instead of opening public issues.
+Please report vulnerabilities via [GitHub Security Advisories](https://github.com/dev-dami/ignite/security/advisories/new) instead of opening public issues.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/threat-model.md` around lines 87 - 89, The "Responsible Disclosure"
section currently tells reporters to report privately but provides no channel;
update the Responsible Disclosure paragraph to include an actionable contact
mechanism such as a GitHub security advisory link (e.g., "Use GitHub Security
Advisories: https://github.com/OWNER/REPO/security/advisories"), an explicit
security contact email (e.g., security@yourdomain.com), and optionally a PGP key
or statement about accepted encryption and expected response time so reporters
have a clear, private way to submit vulnerabilities; edit the "Responsible
Disclosure" heading content accordingly.
```markdown
Ignite runs JavaScript/TypeScript code inside isolated Docker containers with optional hardened audit mode. It is designed for systems that execute code you do not fully trust:

<img src="https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/rainbow.png" alt="rainbow" width="100%">

- AI agent generated code
- plugin or extension ecosystems
- user submissions and sandboxed automation
- security-sensitive CI checks
```
Hyphenate the compound modifier "AI agent-generated".
When a compound modifier precedes a noun, it should be hyphenated. This is flagged by the LanguageTool static analysis.
```diff
-AI agent generated code
+AI agent-generated code
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
Ignite runs JavaScript/TypeScript code inside isolated Docker containers with optional hardened audit mode. It is designed for systems that execute code you do not fully trust:

- AI agent-generated code
- plugin or extension ecosystems
- user submissions and sandboxed automation
- security-sensitive CI checks
```
🧰 Tools

🪛 LanguageTool

[grammar] ~23-~23: Use a hyphen to join words.
Context: "...execute code you do not fully trust: - AI agent generated code - plugin or extensi..."
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 21-26, replace the unhyphenated phrase "AI agent generated code" with the compound-modifier form "AI agent-generated code" so the modifier preceding the noun is correctly hyphenated: locate the exact string "AI agent generated code" in the list and update it to "AI agent-generated code".
```ts
async function ensureCleanWorkingTree(): Promise<void> {
  const status = (await $`git status --porcelain`.text()).trim();
  if (status.length > 0) {
    throw new Error("Working tree is not clean. Commit or stash changes before releasing.");
  }
}
```
`git status --porcelain` includes untracked files, but the error message recommends `git stash`, which does not cover them.

`git stash` (without `-u`/`--include-untracked`) silently skips untracked files. A developer who has untracked files (build artifacts, temp outputs) will follow the error message, run `git stash`, and hit the same failure again.

Two clean fixes:

🐛 **Option A — ignore untracked files (simpler, matches intent)**

```diff
- const status = (await $`git status --porcelain`.text()).trim();
+ const status = (await $`git status --porcelain --untracked-files=no`.text()).trim();
  if (status.length > 0) {
    throw new Error("Working tree is not clean. Commit or stash changes before releasing.");
  }
```

🐛 **Option B — keep the check, fix the message**

```diff
- throw new Error("Working tree is not clean. Commit or stash changes before releasing.");
+ throw new Error("Working tree is not clean. Commit or stash changes (use 'git stash -u' to include untracked files) before releasing.");
```

📝 Committable suggestion
> ‼️ **IMPORTANT**
> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

**Option A:**

```ts
async function ensureCleanWorkingTree(): Promise<void> {
  const status = (await $`git status --porcelain --untracked-files=no`.text()).trim();
  if (status.length > 0) {
    throw new Error("Working tree is not clean. Commit or stash changes before releasing.");
  }
}
```

**Option B:**

```ts
async function ensureCleanWorkingTree(): Promise<void> {
  const status = (await $`git status --porcelain`.text()).trim();
  if (status.length > 0) {
    throw new Error("Working tree is not clean. Commit or stash changes (use 'git stash -u' to include untracked files) before releasing.");
  }
}
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@scripts/release.ts` around lines 9-14, the check in `ensureCleanWorkingTree` treats untracked files as dirty because `git status --porcelain` returns lines prefixed with `??`. Update `ensureCleanWorkingTree` to ignore untracked entries by filtering out lines that start with `??` when computing the status, so only staged/modified/deleted files trigger the error. If you prefer the other approach, keep the current check but change the thrown Error text to recommend including untracked files (e.g., `git stash --include-untracked` or `git clean -fd`) so the guidance matches the behavior.
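The filtering approach the prompt describes can be sketched as a small pure helper. The name `trackedChanges` and the wiring into `ensureCleanWorkingTree` are illustrative assumptions, not code from this PR:

```typescript
// Sketch: treat only tracked changes as dirty by dropping untracked
// entries ("??" prefix) from `git status --porcelain` output.
// Keeping the parsing pure lets it be unit-tested without invoking git.
function trackedChanges(porcelain: string): string[] {
  return porcelain
    .split("\n")
    .map((line) => line.trimEnd())
    .filter((line) => line.length > 0 && !line.startsWith("??"));
}

// Hypothetical wiring inside ensureCleanWorkingTree:
//   const status = await $`git status --porcelain`.text();
//   if (trackedChanges(status).length > 0) {
//     throw new Error("Working tree is not clean. Commit or stash changes before releasing.");
//   }
```

Note that porcelain status codes occupy the first two columns (e.g. `" M"` for modified, `"A "` for staged-new, `"??"` for untracked), so a prefix check on `??` is sufficient to distinguish untracked entries.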
Summary
Verification
Summary by CodeRabbit

- Documentation
- Chores
- Tests
- Bug Fixes