feat: Blender 5.x API fixes + planner edit awareness (#20)
Conversation
- Created scripts/generate-training-data.ts (parses 125 RAG scripts into instruction-output pairs)
- Generated 269 training pairs (125 full-script + 144 function-level) in training/training_data.jsonl
- Created training/eval_prompts.json (50 held-out test prompts across all categories)
- Created training/train_blender_codegen.py (QLoRA 4-bit NF4 for Azure A100, targets Qwen3-8B)
- New RAG script: displacement_textures.py (raked sand, water ripples, rocky terrain, modifier displacement)
- Re-ingested 125 scripts into pgvector
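The instruction-output pair format can be sketched in a few lines. This is a minimal Python illustration of the JSONL shape described above, not the actual scripts/generate-training-data.ts (which is TypeScript); the instruction wording here is hypothetical:

```python
import json

def make_pairs(scripts: dict) -> str:
    """Turn {name: source} scripts into instruction-output JSONL records."""
    pairs = []
    for name, source in scripts.items():
        pairs.append({
            "instruction": f"Write a Blender Python script: {name}",
            "output": source,
        })
    # One JSON object per line, the conventional JSONL training format
    return "\n".join(json.dumps(p) for p in pairs)

jsonl = make_pairs({"hdri_lighting": "import bpy\n# ..."})
```

Function-level pairs would follow the same record shape, with the instruction describing a single function instead of a whole script.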
- New: hdri_lighting.py (HDRI environment, gradient sky, solid color world)
- New: photorealistic_materials.py (image-based PBR + procedural stone/wood)
- Updated: blender_api_pitfalls.py (+pitfall 16: noise_scale, +pitfall 17: camera X rotation)
- Re-ingested 127 scripts into pgvector
- Regenerated training data: 281 pairs (up from 269)
- New: interior_rooms.py (create_room, wall_with_doorway, wall_with_window)
- Real-world scale reference table for interiors
- Re-ingested 128 scripts into pgvector
- Training data regenerated (286 pairs)
…back loop
- Import suggestImprovements from lib/ai/vision.ts into executor
- Insert 140-line correction loop between viewport switch (step 4) and audit (step 5)
- Capture viewport screenshot -> Gemini Vision analyzes -> generate correction code
- Only correct high-priority issues (agent decides based on confidence)
- Max 2 iterations, entirely non-fatal on any error
- Add AgentVisualAnalysis and AgentVisualCorrection stream event types
- Enable enableVisualFeedback: true in chat route by default
…erHub
- Scraped 5 BlenderHub pages (top 50, free, geo nodes, animation, rendering)
- Cataloged built-in addons (zero install), free community addons, paid addons
- Documented Python API access patterns (bpy.ops.addon_name)
- Added 3-phase integration strategy (built-in → free → paid)
- Sources: BlenderHub, DuetPBR
…ation architecture
…p, open-source model decision
- auto_retopology.py: Voxel/Quadriflow remesh, decimation, mesh repair
- auto_rigify.py: Rigify metarig templates, rig generation, auto weights
- auto_uv_unwrap.py: Shape-based auto UV, lightmap, texel density
- procedural_animation.py: Orbit, wave, pendulum, spring, NLA, dolly zoom
- pbr_texture_loader.py: PBR map loader, folder discovery, baking
- model_export.py: LOD chain, Game/VFX/Web/Print presets, USD, validation
- prompts.ts: Added PRODUCTION PIPELINE hints to CODE_GENERATION_PROMPT
- Re-ingested 134 scripts into pgvector
New lib/neural/ module with 5 provider clients:
- Hunyuan Shape 2.1 (geometry, self-hosted, 10GB VRAM)
- Hunyuan Paint 2.1 (PBR texturing, self-hosted, 21GB VRAM)
- Hunyuan Part (mesh segmentation, Gradio/HF)
- TRELLIS 2 (Microsoft, MIT, geometry+PBR, 24GB VRAM)
- YVO3D (premium texturing API, up to ULTIMA 8K)

Core: types.ts, base-client.ts, registry.ts, index.ts
Hybrid pipeline: 8-stage orchestrator (neural→Blender→export)
RAG: import_neural_mesh.py (import, cleanup, normalize, decimate, UV, PBR)
Prompts: neural vs procedural decision rules
Stream events: AgentNeuralGeneration, AgentHybridPipeline
…ural/hybrid
- New strategy-types.ts: Strategy, StrategyDecision, StrategyOverride types
- New strategy-router.ts: Two-phase classification (keyword patterns + LLM fallback)
- 7 procedural, 5 neural, 4 hybrid regex pattern groups
- Gemini JSON-mode LLM fallback for ambiguous requests
- User override support for manual strategy selection
- types.ts: AgentStrategyClassification stream event, strategyDecision in PlanningMetadata
- planner.ts: Injects neural context when strategy is neural/hybrid
- executor.ts: Accepts strategyDecision in ExecutionOptions
- route.ts: Classifies between scene snapshot and planning, emits strategy event
- tsc --noEmit: 0 errors
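The two-phase routing idea can be sketched briefly. This is a Python illustration with made-up pattern groups, not the actual strategy-router.ts (which has 7 procedural, 5 neural, and 4 hybrid groups and a Gemini JSON-mode fallback):

```python
import re

# Hypothetical pattern groups for illustration only
PATTERNS = {
    "procedural": [r"\b(room|wall|floor|light|camera)\b"],
    "neural": [r"\b(realistic|photoreal|organic|creature)\b"],
}

def classify(prompt: str) -> str:
    """Phase 1: keyword patterns. Phase 2 (LLM fallback) is stubbed here."""
    hits = {
        strategy: sum(bool(re.search(p, prompt, re.I)) for p in pats)
        for strategy, pats in PATTERNS.items()
    }
    best = max(hits, key=hits.get)
    if hits[best] == 0:
        return "llm_fallback"  # ambiguous request -> would call the LLM
    return best
```

Keyword matching answers the cheap, unambiguous cases instantly; only requests that match nothing (or, in the real router, match conflicting groups) pay the latency of an LLM call.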
- New workflow-types.ts: WorkflowStep, WorkflowProposal, step actions/statuses
- New workflow-advisor.ts: LLM per-step tool recommendations + static fallback
- New workflow-step/route.ts: per-step execution API (neural/blender/skip/manual)
- New workflow-panel.tsx: step cards, Execute/Manual/Skip buttons, progress bar
- New mock-neural-server.ts: returns valid minimal GLB for testing
- Chat route: neural/hybrid requests generate WorkflowProposal instead of auto-executing
- Stream events: AgentWorkflowProposal, AgentWorkflowStepUpdate
- project-chat.tsx: handles workflow_proposal event, renders WorkflowPanel
- TypeScript: tsc --noEmit passes with 0 errors
…ender 4.x

The prompt previously told the LLM to SKIP use_nodes because it was 'deprecated in Blender 5.x'. On Blender 4.x, this causes mat.node_tree to be None, crashing every material creation step. Fixed: the pattern now always calls mat.use_nodes = True (a harmless no-op on 5.x, essential on 4.x). Removed the incorrect AVOID entry.
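The corrected pattern can be sketched as a generated-script builder (the project embeds Blender Python as strings; this helper and its parameters are hypothetical):

```python
def material_script(name: str, base_color=(0.8, 0.2, 0.2, 1.0)) -> str:
    """Build a Blender Python snippet for a node-based material.

    mat.use_nodes = True is a harmless no-op on Blender 5.x but required
    on 4.x, where mat.node_tree is None until nodes are enabled.
    """
    return (
        "import bpy\n"
        f"mat = bpy.data.materials.new(name={name!r})\n"
        "mat.use_nodes = True  # required on 4.x, no-op on 5.x\n"
        "bsdf = mat.node_tree.nodes.get('Principled BSDF')\n"
        f"bsdf.inputs['Base Color'].default_value = {tuple(base_color)}\n"
    )
```

Emitting the assignment unconditionally is the safe choice precisely because it costs nothing on 5.x.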
…gging
1. Step result log entries now include logType: 'execute' instead of undefined
2. Screenshot extraction tries 3 data paths (result.image, top-level, double-nested)
3. Logs full response shape when screenshot capture fails for easier debugging
- AbortController with 60s stream stall timeout detection
- Classify errors as retryable (network/timeout/abort) vs non-retryable
- Keep partial progress on retryable errors instead of removing assistant msg
- handleRetry() re-sends the saved payload
- Retry button in error display with destructive outline styling
- Clear retry state in handleStartNew
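The retryable/non-retryable split above boils down to matching transient-failure markers in the error message. A minimal sketch (marker list is illustrative, not the project's actual list):

```python
# Hypothetical marker list; transient failures are worth an automatic retry,
# everything else (e.g. a syntax error in generated code) is not.
RETRYABLE_MARKERS = ("network", "timeout", "abort", "econnreset", "fetch failed")

def is_retryable(error_message: str) -> bool:
    """Classify an error as retryable (transient) vs non-retryable."""
    msg = error_message.lower()
    return any(marker in msg for marker in RETRYABLE_MARKERS)
```

Keeping partial progress on retryable errors then follows naturally: the saved payload is re-sent only when this predicate is true.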
1. Remove width/height/format from get_viewport_screenshot — addon returned 'unexpected keyword argument width' error
2. Add planning rule 10: never delete existing lights when editing a scene unless user explicitly asks — prevents black rendered view
3. Fix in both executor.ts and screenshot.ts
- Trim BLENDER_SYSTEM_PROMPT to essential identity (target: Blender 5.0+)
- Planning rule 5: distinguish NEW SCENE vs EDIT SCENE flows
- Blender API section: target 5.x only, remove version compat noise
- Remove dead BLENDER_FEW_SHOT_EXAMPLES (never imported, RAG replaces)
- Condense boolean operations section
- Fix screenshot params (no width/height/format — addon rejects them)
- Add planning rule 10: preserve lights when editing scenes
Shrine test step 6 crashed on 'Subsurface Color', which doesn't exist in Blender 5.x. Added all 5 renamed/removed sockets to the AVOID section with an explicit 'will crash' warning for a stronger LLM signal.
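A defensive way to survive renamed/removed Principled BSDF sockets is to check membership before writing. This sketch uses a plain dict as a stand-in for bsdf.inputs; the rename table is a subset (in Blender 4.0, 'Specular' became 'Specular IOR Level' and 'Emission' became 'Emission Color', while 'Subsurface Color' was removed with no successor):

```python
# Old socket name -> current name (partial; verify against your Blender build)
RENAMED_SOCKETS = {
    "Specular": "Specular IOR Level",
    "Emission": "Emission Color",
}

def set_socket(bsdf_inputs: dict, name: str, value):
    """Set a Principled BSDF input by old or new name; skip if absent.

    bsdf_inputs is a dict stand-in for the real bsdf.inputs collection,
    so this sketch runs outside Blender.
    """
    target = name if name in bsdf_inputs else RENAMED_SOCKETS.get(name)
    if target in bsdf_inputs:
        bsdf_inputs[target] = value
        return target
    return None  # removed socket (e.g. 'Subsurface Color') -> no-op, no crash
```

For LLM-generated code, the AVOID-list approach taken in this commit is the stronger fix; a runtime guard like this is a belt-and-suspenders fallback.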
1. Addon get_viewport_screenshot: when no filepath given, now uses
temp file, captures viewport, encodes as base64, returns inline.
Root cause: 'if not filepath: return {error: No filepath}' — we
never sent a filepath.
2. Added shadow_method/shadow_mode to AVOID section — fog layer step
crashed 3x in edit shrine test on these non-existent attributes.
3. Synced addon to public/downloads.
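The temp-file-to-base64 flow from fix 1 can be shown without Blender by injecting the capture step as a callable (the write_viewport_png parameter stands in for the addon's actual viewport capture; names here are illustrative):

```python
import base64
import os
import tempfile

def capture_inline(write_viewport_png) -> dict:
    """Sketch of the addon fix: no filepath -> temp file -> base64 -> cleanup."""
    tmp = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
    filepath = tmp.name
    tmp.close()
    try:
        write_viewport_png(filepath)          # Blender writes the PNG here
        with open(filepath, "rb") as f:
            data = base64.b64encode(f.read()).decode("ascii")
        return {"image": data, "format": "png"}
    finally:
        os.remove(filepath)                   # never leak the temp file
```

Returning the image inline means the caller never has to agree on a shared filesystem path with the addon, which was the root cause above.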
1. Stream stall timeout: 60s -> 180s (deep thinking models need more)
2. Rewrote eevee_setup.py for Blender 5.x — removed use_ssr, use_gtao, use_bloom, taa_render_samples, shadow_cascade_size; replaced bloom with compositor Glare node approach
3. Fixed toon_setup.py: added use_nodes=True, removed deprecated attrs
4. Added EEVEE SSR attributes to AVOID section in prompts.ts
5. Re-ingested all scripts into pgvector
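The compositor-Glare bloom replacement from item 2 can be sketched as a script-string builder (the pattern the project uses for generated Blender code). The node type name CompositorNodeGlare is Blender's compositor API; the helper name, the parameter, and the 'BLOOM' glare_type value are assumptions to verify against your Blender build:

```python
def glare_bloom_script(threshold: float = 1.0) -> str:
    """Build a compositor-based bloom snippet (EEVEE use_bloom is gone in 5.x)."""
    return f"""import bpy
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
glare = tree.nodes.new('CompositorNodeGlare')
glare.glare_type = 'BLOOM'
glare.threshold = {threshold}
layers = tree.nodes.get('Render Layers')
composite = tree.nodes.get('Composite')
if layers and composite:
    tree.links.new(layers.outputs['Image'], glare.inputs['Image'])
    tree.links.new(glare.outputs['Image'], composite.inputs['Image'])
"""
```

Routing Render Layers through the Glare node before Composite reproduces the old render-settings bloom as a post-process step.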
1. Added planning rule 11: OBJECT GROUNDING — objects on floor Z=0, wall-mounted objects flush with wall surface, explicit coordinates
2. Added planning rule 12: LIGHTING ENERGY — minimum 1000W point lights for dark indoor scenes, 500W+ fill lights
3. Increased code gen light energy guidance (500W -> 1000W for key)
4. Fixed missing 'import os' in addon get_viewport_screenshot
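The grounding rule reduces to one subtraction: move the object down (or up) by the world-space Z of its lowest bounding-box corner. Pure-math sketch; in Blender the minimum would come from obj.matrix_world applied to obj.bound_box corners:

```python
def grounded_z(location_z: float, world_min_z: float) -> float:
    """New object Z that puts its lowest point exactly on the floor (Z=0).

    world_min_z is the minimum world-space Z of the object's bounding box.
    """
    return location_z - world_min_z

# e.g. an object at z=1.0 whose bbox bottom sits at z=0.6 moves to z=0.4;
# an object at z=0.0 poking 0.5 below the floor moves up to z=0.5
```

Stating the rule as explicit coordinates in the plan (rather than "place on the floor") is what makes it enforceable by the code generator.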
…cript
Added sections 19-21 to api_version_compatibility.py:
- 19: blend_method valid values (OPAQUE/CLIP/HASHED/BLEND), shadow_method removed
- 20: All removed EEVEE 5.x properties (SSR, bloom, GTAO, shadow_cascade)
- 21: create_transparent_material() helper function

Re-ingested all scripts into pgvector.
…ON, stronger rules
1. executor.ts: Capture get_scene_info/get_all_object_info results and inject structured object list (name, type, location, dimensions) into every generateCode() call as 'Current Scene Objects' context
2. route.ts: Scene snapshot now returns structured JSON instead of formatted string. Object cap increased 12 -> 30.
3. prompts.ts: Edit rule now mandates referencing existing objects by exact name + coordinates, never recreating objects that exist
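The context-injection step from item 1 can be sketched as a formatter over the structured object list. Field names mirror the description above (name, type, location, dimensions); the exact shape in executor.ts/route.ts may differ:

```python
def scene_context(objects: list, cap: int = 30) -> str:
    """Format a structured object list for injection into code generation."""
    lines = ["Current Scene Objects:"]
    for obj in objects[:cap]:  # cap mirrors the 30-object snapshot limit
        loc = ", ".join(f"{v:.2f}" for v in obj["location"])
        dim = ", ".join(f"{v:.2f}" for v in obj["dimensions"])
        lines.append(f"- {obj['name']} ({obj['type']}) at ({loc}), size ({dim})")
    return "\n".join(lines)
```

Giving the generator exact names and coordinates is what lets the edit rule demand "reference by exact name, never recreate".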
Caution: Review failed — failed to post review comments.

📝 Walkthrough

This PR introduces a comprehensive 3D generation pipeline with neural provider integration, strategy-based routing, and guided workflow orchestration. It adds neural client implementations (Hunyuan Shape/Paint/Part, TRELLIS, YVO3D), hybrid pipeline orchestration, strategy classification, workflow proposals, new Blender processing scripts (retopology, rigging, UV unwrap, animation, materials, export), chat route enhancements with streaming and retry logic, and a training/fine-tuning pipeline for code generation.
Sequence Diagram(s)

sequenceDiagram
participant User
participant ChatAPI as Chat Route
participant Classifier as Strategy Classifier
participant Advisor as Workflow Advisor
participant Executor as Plan Executor
participant Neural as Neural Provider
participant Blender as Blender/MCP
User->>ChatAPI: Send request + scene snapshot
ChatAPI->>Classifier: Classify strategy
Classifier-->>ChatAPI: strategy (procedural/neural/hybrid)
alt Neural or Hybrid
ChatAPI->>Advisor: Generate workflow proposal
Advisor-->>ChatAPI: Multi-step workflow
ChatAPI-->>User: Stream workflow_proposal event
loop Each Workflow Step
User->>ChatAPI: Execute step
alt Neural Tool
ChatAPI->>Neural: Generate geometry/texture
Neural-->>ChatAPI: Model path
else Blender Agent
ChatAPI->>Executor: Execute plan
Executor->>Blender: Run MCP script
Blender-->>Executor: Result
else Manual
ChatAPI-->>User: Await manual completion
end
ChatAPI-->>User: Stream step_update event
end
else Procedural
ChatAPI->>Executor: Generate & execute plan
Executor->>Blender: Python script via MCP
Blender-->>Executor: Scene state
Executor-->>ChatAPI: Execution result
end
ChatAPI-->>User: Stream planning + commands + logs
sequenceDiagram
participant WorkflowPanel as UI Component
participant StepAPI as Step Route
participant Neural as Neural Client
participant Planner as Blender Planner
participant Executor as Plan Executor
participant MCP as MCP/Blender
WorkflowPanel->>StepAPI: POST execute action
alt recommendedTool = neural
StepAPI->>Neural: Generate model
Neural-->>StepAPI: Model path
else recommendedTool = blender_agent
StepAPI->>Planner: Generate plan
Planner-->>StepAPI: Plan steps
StepAPI->>Executor: Execute plan
Executor->>MCP: Run Python
MCP-->>Executor: Result
else recommendedTool = manual
StepAPI-->>WorkflowPanel: Manual complete
end
StepAPI-->>WorkflowPanel: WorkflowStepResult
WorkflowPanel->>WorkflowPanel: Update step status
WorkflowPanel->>WorkflowPanel: Check blocking
WorkflowPanel->>WorkflowPanel: Show next step
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 115
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
lib/mcp/screenshot.ts (2)
71-73: ⚠️ Potential issue | 🟡 Minor

`estimateScreenshotSize` returns decoded binary size, not API payload size.

`image.length * 0.75` is the estimated size of the decoded binary data (reversing base64 expansion). The actual contribution to an API payload is the base64 string itself, whose length is simply `image.length`. The JSDoc says "Useful for monitoring API payload sizes", which is contradicted by the formula.

🐛 Proposed fix
```diff
 export function estimateScreenshotSize(screenshot: ViewportScreenshotResponse): number {
-  // Base64 encoding increases size by ~33%
-  return Math.ceil(screenshot.image.length * 0.75)
+  // The image field is a base64 string; its length IS the payload cost.
+  // Multiply by 0.75 only if you need the decoded (binary) byte count.
+  return screenshot.image.length // bytes contributed to the JSON payload
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/mcp/screenshot.ts` around lines 71 - 73, The function estimateScreenshotSize currently computes the decoded binary size by applying a 0.75 factor; change it to return the actual API payload size (the base64 string length) instead so it matches the JSDoc: use the length of screenshot.image (e.g., return Math.ceil(screenshot.image.length) or simply screenshot.image.length) and remove the 0.75 multiplier; update any inline comment to state it returns the base64 payload size.
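The size relationship this comment relies on is easy to check with the standard library: base64 maps every 3 raw bytes to 4 output characters, so the encoded length grows by 4/3 and multiplying by 0.75 recovers the decoded byte count.

```python
import base64

payload = b"\x89PNG" * 300        # 1200 raw bytes
encoded = base64.b64encode(payload)

# 1200 bytes / 3 * 4 = 1600 characters (no padding needed here)
assert len(encoded) == 1600
# 0.75 recovers the decoded size, not the payload cost
assert int(len(encoded) * 0.75) == len(payload)
```

This is exactly why `image.length` (the base64 string length) is the right number for payload monitoring, while the `* 0.75` formula answers a different question.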
17-47: ⚠️ Potential issue | 🟠 Major

`options.width/height/format` are silently ignored — API contract mismatch.

Sending `params: {}` means the MCP server never receives the caller-supplied dimensions or format. These options now only act as tertiary fallbacks in the return value (`result.width ?? options.width ?? 1920`), which are reached only when the server omits those fields. If the server always populates `result.width/height/format` (even with its own defaults), `options.*` is completely invisible — a caller passing `{ width: 512, height: 512, format: "jpeg" }` gets a full-resolution PNG back with no indication their request was ignored.

Either forward the options to the server, or remove the parameters from the public signature to avoid a misleading contract.
🐛 Proposed fix — restore forwarding of options to MCP params
```diff
 export async function getViewportScreenshot(options: {
   width?: number
   height?: number
   format?: "png" | "jpeg"
 } = {}): Promise<ViewportScreenshotResponse> {
   const client = createMcpClient()
   try {
     const response = await client.execute<ViewportScreenshotResponse>({
       type: "get_viewport_screenshot",
-      params: {},
+      params: {
+        ...(options.width !== undefined && { width: options.width }),
+        ...(options.height !== undefined && { height: options.height }),
+        ...(options.format !== undefined && { format: options.format }),
+      },
     })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/mcp/screenshot.ts` around lines 17 - 47, getViewportScreenshot currently ignores the caller-provided options because it sends params: {} to the MCP; update the call to client.execute inside getViewportScreenshot to forward options (width, height, format) as the request params (e.g., params: { width: options.width, height: options.height, format: options.format }) so the server receives the requested dimensions/format, while keeping the existing fallback logic when result fields are missing; ensure the params keys match the MCP API names and that format is restricted to "png" | "jpeg" as in the function signature.

desktop/assets/modelforge-addon.py (1)
757-758: ⚠️ Potential issue | 🟠 Major

Add `use_nodes = True` before accessing `node_tree` on newly created materials.

The code accesses `mat.node_tree.nodes` and `mat.node_tree.links` immediately after creating materials with `bpy.data.materials.new()` at lines 757-758 and 985, without first setting `use_nodes = True`. In Blender, depending on the render engine, a newly created material may have `node_tree = None`, which would cause an `AttributeError` when accessing `node_tree.nodes`. Set `use_nodes = True` to ensure the node tree is available.

Proposed fix
```diff
 mat = bpy.data.materials.new(name=asset_id)
+mat.use_nodes = True
 nodes = mat.node_tree.nodes
```

And similarly at line 985:
```diff
 new_mat = bpy.data.materials.new(name=new_mat_name)
+new_mat.use_nodes = True
 nodes = new_mat.node_tree.nodes
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@desktop/assets/modelforge-addon.py` around lines 757 - 758, After creating a new material with bpy.data.materials.new(name=asset_id) you must enable nodes before accessing mat.node_tree; set mat.use_nodes = True immediately after the material creation (for the variable mat) so subsequent accesses like mat.node_tree.nodes and mat.node_tree.links won’t raise AttributeError; apply the same change for the second material creation site (the other bpy.data.materials.new(...) usage near the later block) to ensure both node trees are available.

public/downloads/modelforge-addon.py (3)
1033-1108: 🛠️ Refactor suggestion | 🟠 Major

Duplicate shader node creation — first pass nodes are orphaned by second pass.

The `for map_type, image in texture_images.items()` loop (lines 1033–1053) creates `ShaderNodeNormalMap` and `ShaderNodeDisplacement` nodes and connects them. Then the "second pass" (lines 1056–1108) creates new instances of those same node types and reconnects the same BSDF inputs, leaving the first-pass nodes orphaned in the node tree.

Remove the normal/displacement handling from the first pass (lines 1040–1052), since the second pass handles it more thoroughly (including ARM and AO logic).
Proposed fix — remove duplicate handling from first pass
```diff
 # Connect to appropriate input on Principled BSDF
 if map_type.lower() in ['color', 'diffuse', 'albedo']:
     links.new(tex_node.outputs['Color'], principled.inputs['Base Color'])
 elif map_type.lower() in ['roughness', 'rough']:
     links.new(tex_node.outputs['Color'], principled.inputs['Roughness'])
 elif map_type.lower() in ['metallic', 'metalness', 'metal']:
     links.new(tex_node.outputs['Color'], principled.inputs['Metallic'])
-elif map_type.lower() in ['normal', 'nor', 'dx', 'gl']:
-    # Add normal map node
-    normal_map = nodes.new(type='ShaderNodeNormalMap')
-    normal_map.location = (x_pos + 200, y_pos)
-    links.new(tex_node.outputs['Color'], normal_map.inputs['Color'])
-    links.new(normal_map.outputs['Normal'], principled.inputs['Normal'])
-elif map_type.lower() in ['displacement', 'disp', 'height']:
-    # Add displacement node
-    disp_node = nodes.new(type='ShaderNodeDisplacement')
-    disp_node.location = (x_pos + 200, y_pos - 200)
-    disp_node.inputs['Scale'].default_value = 0.1  # Reduce displacement strength
-    links.new(tex_node.outputs['Color'], disp_node.inputs['Height'])
-    links.new(disp_node.outputs['Displacement'], output.inputs['Displacement'])
+# Normal, displacement, AO, and ARM are handled in the second pass below
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@public/downloads/modelforge-addon.py` around lines 1033 - 1108, The first-pass loop that iterates texture_images (the for map_type, image in texture_images.items() block) must not create ShaderNodeNormalMap or ShaderNodeDisplacement or link them to principled/output because the second-pass already creates and wires normal and displacement nodes; remove the branches handling map_type.lower() in ['normal','nor','dx','gl'] and ['displacement','disp','height'] (the creation of ShaderNodeNormalMap/ShaderNodeDisplacement and the links.new calls) from that first-pass, leaving only the color/roughness/metallic handling and the y_pos decrement so the second-pass code (which creates normal_map_node and disp_node and links them) is the sole place those special nodes are instantiated and connected.
462-536: ⚠️ Potential issue | 🔴 Critical

`UnboundLocalError` in except handler if an early exception occurs.

`return_base64` is first assigned at line 478 inside the `try` block. If an exception fires before that point (e.g., `bpy.context.screen` is `None`), the `except` handler at line 531 references an unbound name, raising `UnboundLocalError` and masking the real error.

Also, `filepath` may still hold the caller-supplied `None` at that point, so the `os.remove(filepath)` call would also fail.

🐛 Proposed fix — initialize sentinel values before the try block
```diff
 import os
 import tempfile
 import base64

+return_base64 = filepath is None
+tmp_created = False
+
 try:
     # Find the active 3D viewport
     area = None
     for a in bpy.context.screen.areas:
         if a.type == 'VIEW_3D':
             area = a
             break
     if not area:
         return {"error": "No 3D viewport found"}

     # Determine file path — use temp if none provided
-    return_base64 = filepath is None
     if return_base64:
         tmp = tempfile.NamedTemporaryFile(suffix=f".{format}", delete=False)
         filepath = tmp.name
         tmp.close()
+        tmp_created = True

     # ... rest of try ...

 except Exception as e:
     # Clean up temp file on error
-    if return_base64 and filepath:
+    if tmp_created and filepath:
         try:
             os.remove(filepath)
         except OSError:
             pass
     return {"error": str(e)}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@public/downloads/modelforge-addon.py` around lines 462 - 536, Initialize the sentinel variables before the try block to avoid UnboundLocalError: set return_base64 = False and ensure filepath (and any tmp-related variable) is defined (e.g., filepath = None) before entering the try; inside the except handler, only attempt os.remove if filepath is not None and os.path.exists(filepath) to avoid removing a None or non-existent path. Update references to return_base64 and filepath in the try/except and cleanup code (the variables named return_base64 and filepath and the except block that currently removes the temp file) so the except handler never references uninitialized names and only deletes real files.
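The sentinel pattern the fix describes can be demonstrated in isolation (simplified stand-in for the addon's handler; the fail_early switch just simulates where the exception fires):

```python
import os
import tempfile

def risky_capture(fail_early: bool) -> dict:
    """Initialize flags BEFORE the try block so the except handler
    never touches unbound names or removes a nonexistent file."""
    tmp_created = False
    filepath = None
    try:
        if fail_early:
            raise RuntimeError("no 3D viewport")   # fires before tmp exists
        tmp = tempfile.NamedTemporaryFile(delete=False)
        filepath = tmp.name
        tmp.close()
        tmp_created = True
        raise RuntimeError("capture failed")        # fires after tmp exists
    except Exception as e:
        if tmp_created and filepath:
            os.remove(filepath)                     # safe: file really exists
        return {"error": str(e)}                    # real error, not UnboundLocalError
```

Without the pre-try initialization, the early-failure path would raise `UnboundLocalError` inside the handler and hide the original message.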
1271-1278: ⚠️ Potential issue | 🟠 Major

Error returns a plain string instead of an error dict — client sees `"status": "success"`.

The `_execute_command_internal` handler wraps the return value as `{"status": "success", "result": <value>}`. Returning a bare string here means the client receives a "success" response containing an error message. This applies to `create_rodin_job`, `poll_rodin_job_status`, and `import_generated_asset` (line 1450).

Additionally, the `f"..."` has no placeholders (Ruff F541).

🐛 Proposed fix (apply same pattern at lines 1347, 1450)
```diff
 case _:
-    return f"Error: Unknown Hyper3D Rodin mode!"
+    return {"error": "Unknown Hyper3D Rodin mode!"}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@public/downloads/modelforge-addon.py` around lines 1271 - 1278, The create_rodin_job function currently returns a plain string (and uses an unnecessary f-string) on unknown mode which causes the RPC wrapper to mark it as success; change the fallback to return an error dict consistent with other handlers (e.g. {"status":"error", "error": "Unknown Hyper3D Rodin mode"}), remove the f-string, and apply the same pattern to poll_rodin_job_status and import_generated_asset so they also return an error dict (not a bare string) when given an unknown mode.

lib/ai/index.ts (1)
1-9: ⚠️ Potential issue | 🟡 Minor

Stale module docstring — still references "Gemini 2.5 Pro".

Line 5 reads `"Gemini 2.5 Pro for LLM generation"` but `DEFAULT_MODEL` is now `gemini-3.1-pro-preview`.

📝 Proposed fix
```diff
- * - Gemini 2.5 Pro for LLM generation
+ * - Gemini 3.1 Pro (preview) for LLM generation
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/ai/index.ts` around lines 1 - 9, Update the module docstring in lib/ai/index.ts to reflect the current default LLM by replacing the outdated "Gemini 2.5 Pro for LLM generation" text with "gemini-3.1-pro-preview" (or a short phrase referencing the DEFAULT_MODEL constant) so the header matches the actual DEFAULT_MODEL value; search for the top-level comment block and the DEFAULT_MODEL symbol to ensure consistency between documentation and code.

lib/stripe.ts (1)
4-4: 🧹 Nitpick | 🔵 Trivial

Consider upgrading the pinned Stripe API version.

The current Stripe API version is `2026-01-28.clover`. The code is pinned to `2025-02-24.acacia`, which is the last monthly version in the prior major release family. Starting with the `2024-09-30.acacia` release, Stripe follows a new API release process; twice a year, a new release (e.g., "clover") is issued with breaking changes. Upgrading to the `clover` release ensures access to the latest features and types.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/stripe.ts` at line 4, Update the pinned Stripe API version string used when creating the Stripe client: replace the old apiVersion value ('2025-02-24.acacia') with the current major release value ('2026-01-28.clover') wherever the apiVersion option is set (look for the apiVersion key in the Stripe client constructor/config). After changing the string, run any Stripe-related type checks and integration tests and adjust any call sites or types that break due to the new release's breaking changes.

GEMINI.md (1)
466-517: ⚠️ Potential issue | 🟡 Minor

Duplicate and stale sections — the bottom half of the file is an older copy.
Lines 466–567 duplicate the "Agent Rules", "Progress Tracking", "Session Log", and "Key Files Reference" sections that already exist (and are more up-to-date) in Lines 72–150. Notable inconsistencies between them:
- Line 469 references `Claude.md` (the old filename), while the top uses `GEMINI.md`.
- Line 502 says "NextAuth v5" while Line 18 says "Supabase Auth (NextAuth fully removed)".
Remove the duplicate bottom sections to avoid drift and confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@GEMINI.md` around lines 466 - 517, Remove the duplicate outdated sections at the bottom of GEMINI.md by deleting the repeated "Agent Rules", "Progress Tracking", "Session Log", and "Key Files Reference" blocks (the copy that references Claude.md, Next.js 15, and NextAuth v5), leaving only the up-to-date top-half content; ensure references to "Claude.md" are changed back to "GEMINI.md" if present and reconcile any remaining version mentions so the file consistently uses the current values (e.g., Next.js 16 and Supabase Auth).

lib/orchestration/executor.ts (1)
694-694: ⚠️ Potential issue | 🔴 Critical

Bug: `mat.use_nodes = True` missing before accessing `mat.node_tree` in `ensureDefaultMaterials`.

The generated Python creates a new material with `bpy.data.materials.new()` and immediately accesses `mat.node_tree.nodes.get('Principled BSDF')` without calling `mat.use_nodes = True` first. In Blender 5.x, `mat.node_tree` is `None` until `use_nodes` is enabled — this will crash with an `AttributeError`. The prompts file (line 82, 98) explicitly documents this requirement.

Suggested fix
```diff
-  const script = `import bpy\n\nDEFAULT_NAME = "ModelForge_Default_Material"\nmat = bpy.data.materials.get(DEFAULT_NAME)\nif mat is None:\n    mat = bpy.data.materials.new(name=DEFAULT_NAME)\n    bsdf = mat.node_tree.nodes.get('Principled BSDF')\n    if bsdf:\n        bsdf.inputs['Base Color'].default_value = (0.85, 0.82, 0.78, 1.0)\n        bsdf.inputs['Roughness'].default_value = 0.4\n\nfor obj_name in [${pythonList}]:\n    obj = bpy.data.objects.get(obj_name)\n    if not obj or obj.type != 'MESH':\n        continue\n    if not obj.data.materials:\n        obj.data.materials.append(mat)\n    else:\n        obj.data.materials[0] = mat\n`
+  const script = `import bpy\n\nDEFAULT_NAME = "ModelForge_Default_Material"\nmat = bpy.data.materials.get(DEFAULT_NAME)\nif mat is None:\n    mat = bpy.data.materials.new(name=DEFAULT_NAME)\n    mat.use_nodes = True\n    bsdf = mat.node_tree.nodes.get('Principled BSDF')\n    if bsdf:\n        bsdf.inputs['Base Color'].default_value = (0.85, 0.82, 0.78, 1.0)\n        bsdf.inputs['Roughness'].default_value = 0.4\n\nfor obj_name in [${pythonList}]:\n    obj = bpy.data.objects.get(obj_name)\n    if not obj or obj.type != 'MESH':\n        continue\n    if not obj.data.materials:\n        obj.data.materials.append(mat)\n    else:\n        obj.data.materials[0] = mat\n`
```

The diff adds `mat.use_nodes = True\n` after `bpy.data.materials.new(name=DEFAULT_NAME)`.
Verify each finding against the current code and only fix it if needed. In `@lib/orchestration/executor.ts` at line 694, The generated Blender Python script in executor.ts (the `script` string used by ensureDefaultMaterials) creates a new material via `bpy.data.materials.new(name=DEFAULT_NAME)` and then accesses `mat.node_tree` without enabling nodes; set `mat.use_nodes = True` immediately after creating the material to ensure `mat.node_tree` is available before calling `mat.node_tree.nodes.get('Principled BSDF')`, updating the `script` string in the `ensureDefaultMaterials`/executor.ts location accordingly.
```python
def set_camera_looking_slightly_up(camera_obj):
    """CORRECT — X=95° means looking 5° above horizontal."""
    camera_obj.rotation_euler = (math.radians(95), 0, 0)


def set_camera_looking_slightly_down(camera_obj):
    """CORRECT — X=80° means looking 10° below horizontal."""
    camera_obj.rotation_euler = (math.radians(80), 0, 0)


def set_camera_looking_horizontal(camera_obj):
    """CORRECT — X=90° = perfectly horizontal."""
    camera_obj.rotation_euler = (math.radians(90), 0, 0)
```
Labeling these as "CORRECT" while zeroing Y/Z may mislead readers
Each function writes the entire rotation_euler tuple with Y=0, Z=0, which silently destroys any existing camera heading and roll. This is fine as a self-contained illustration of X-axis direction, but calling them "CORRECT" without qualification could teach the AI model to always zero out the other axes. A brief note or changing to mutate only rotation_euler[0] would make the intent clearer.
🛡️ Alternative: mutate only the X component
```diff
 def set_camera_looking_slightly_up(camera_obj):
-    """CORRECT — X=95° means looking 5° above horizontal."""
-    camera_obj.rotation_euler = (math.radians(95), 0, 0)
+    """CORRECT — X=95° means looking 5° above horizontal (preserves Y/Z heading)."""
+    camera_obj.rotation_euler[0] = math.radians(95)

 def set_camera_looking_slightly_down(camera_obj):
-    """CORRECT — X=80° means looking 10° below horizontal."""
-    camera_obj.rotation_euler = (math.radians(80), 0, 0)
+    """CORRECT — X=80° means looking 10° below horizontal (preserves Y/Z heading)."""
+    camera_obj.rotation_euler[0] = math.radians(80)

 def set_camera_looking_horizontal(camera_obj):
-    """CORRECT — X=90° = perfectly horizontal."""
-    camera_obj.rotation_euler = (math.radians(90), 0, 0)
+    """CORRECT — X=90° = perfectly horizontal (preserves Y/Z heading)."""
+    camera_obj.rotation_euler[0] = math.radians(90)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@data/blender-scripts/blender_api_pitfalls.py` around lines 314 - 324, The
three helper functions set_camera_looking_slightly_up,
set_camera_looking_slightly_down, and set_camera_looking_horizontal overwrite
the entire rotation_euler tuple (zeroing Y and Z) which unintentionally clears
heading/roll; either update their docstrings to warn that Y and Z are
preserved/that these examples intentionally reset other axes, or change the
implementations to only set the X component (e.g., mutate rotation_euler[0] to
math.radians(...)) so Y/Z remain unchanged and existing heading/roll are not
lost.
```python
def cleanup_neural_mesh(
    obj,
    fix_normals: bool = True,
    remove_doubles: bool = True,
    merge_distance: float = 0.0001,
    fill_holes: bool = True
):
```
🧹 Nitpick | 🔵 Trivial
Boolean positional arguments should be keyword-only (Ruff FBT001/FBT002/FBT003)
Multiple function signatures accept boolean flags as positional arguments (fix_normals, remove_doubles, fill_holes, ground_to_floor, cleanup, auto_uv, setup_pbr). Per Ruff FBT001/FBT002, these invite silent call-site errors (e.g., cleanup_neural_mesh(obj, False, True, 0.001) where the meaning of each False/True is opaque). Making them keyword-only (* separator) aligns with Ruff's guidance and improves readability.
♻️ Proposed refactor (example for cleanup_neural_mesh)

```diff
 def cleanup_neural_mesh(
     obj,
+    *,
     fix_normals: bool = True,
     remove_doubles: bool = True,
     merge_distance: float = 0.0001,
     fill_holes: bool = True
 ):
```

Apply the same `*` separator to normalize_neural_mesh, auto_uv_neural_mesh, and full_neural_import_pipeline, then update the internal call at line 95 to use a keyword argument:

```diff
-    obj.select_set(True)
+    obj.select_set(state=True)
```

Also applies to: 127-131, 197-201, 282-292
🧰 Tools
🪛 Ruff (0.15.1)
[error] 78-78: Boolean-typed positional argument in function definition
(FBT001)
[error] 78-78: Boolean default positional argument in function definition
(FBT002)
[error] 79-79: Boolean-typed positional argument in function definition
(FBT001)
[error] 79-79: Boolean default positional argument in function definition
(FBT002)
[error] 81-81: Boolean-typed positional argument in function definition
(FBT001)
[error] 81-81: Boolean default positional argument in function definition
(FBT002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@data/blender-scripts/import_neural_mesh.py` around lines 76 - 82, Update the
function signatures to make boolean flags keyword-only by adding a
positional-only separator (*) before those bool parameters: change
cleanup_neural_mesh(obj, fix_normals: bool = True, remove_doubles: bool = True,
merge_distance: float = 0.0001, fill_holes: bool = True) and the signatures for
normalize_neural_mesh, auto_uv_neural_mesh, and full_neural_import_pipeline so
flags like fix_normals, remove_doubles, fill_holes, ground_to_floor, cleanup,
auto_uv, and setup_pbr are after a *; then update any internal call sites
(notably the call to cleanup_neural_mesh at the previously referenced internal
call) to pass those values as keyword arguments (e.g., cleanup_neural_mesh(obj,
fix_normals=True, ...) ) so calls remain unambiguous.
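To illustrate what the `*` separator buys at call sites, here is a minimal runnable sketch of the keyword-only signature (the body is a stub that just echoes its flags; `"mesh"` stands in for a real object):

```python
def cleanup_neural_mesh(obj, *, fix_normals=True, remove_doubles=True,
                        merge_distance=0.0001, fill_holes=True):
    """Stub with the keyword-only signature; returns the flags it received."""
    return {"fix_normals": fix_normals, "remove_doubles": remove_doubles,
            "merge_distance": merge_distance, "fill_holes": fill_holes}

# Call sites must now name each flag — no more opaque positional booleans.
result = cleanup_neural_mesh("mesh", remove_doubles=False, merge_distance=0.001)

# A positional boolean is rejected outright with a TypeError:
rejected = False
try:
    cleanup_neural_mesh("mesh", False)
except TypeError as exc:
    rejected = True
    print("rejected:", exc)
```

The TypeError at the second call site is exactly the silent-error class Ruff FBT001/FBT002 warn about, surfaced loudly instead.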
```python
    bpy.context.view_layer.objects.active = obj
    obj.select_set(True)
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')

    # Remove loose geometry
    bpy.ops.mesh.delete_loose(use_verts=True, use_edges=True, use_faces=False)

    # Merge by distance (remove duplicate vertices)
    if remove_doubles:
        bpy.ops.mesh.remove_doubles(threshold=merge_distance)

    # Recalculate normals (neural meshes often have flipped faces)
    if fix_normals:
        bpy.ops.mesh.normals_make_consistent(inside=False)

    # Fill holes (non-manifold boundaries)
    if fill_holes:
        bpy.ops.mesh.select_all(action='DESELECT')
        bpy.ops.mesh.select_non_manifold()
        try:
            bpy.ops.mesh.fill()
        except RuntimeError:
            pass  # Some non-manifold edges can't be filled

    bpy.ops.object.mode_set(mode='OBJECT')
    print(f"Cleaned {obj.name}: {len(obj.data.vertices)} verts, {len(obj.data.polygons)} faces")
```
No try/finally guard — any exception leaves the object stranded in EDIT mode
mode_set(mode='EDIT') is called at line 96 and restored at line 119, but only bpy.ops.mesh.fill() is wrapped in a try/except. Any RuntimeError raised by delete_loose, remove_doubles/merge_by_distance, normals_make_consistent, or select_non_manifold will propagate out while the object remains in EDIT mode, breaking every subsequent operator call in the pipeline. The same pattern repeats in auto_uv_neural_mesh.
🛡️ Proposed fix

```diff
     bpy.context.view_layer.objects.active = obj
     obj.select_set(True)
     bpy.ops.object.mode_set(mode='EDIT')
-    bpy.ops.mesh.select_all(action='SELECT')
-
-    # Remove loose geometry
-    bpy.ops.mesh.delete_loose(use_verts=True, use_edges=True, use_faces=False)
-
-    # Merge by distance (remove duplicate vertices)
-    if remove_doubles:
-        bpy.ops.mesh.merge_by_distance(distance=merge_distance)
-
-    # Recalculate normals (neural meshes often have flipped faces)
-    if fix_normals:
-        bpy.ops.mesh.normals_make_consistent(inside=False)
-
-    # Fill holes (non-manifold boundaries)
-    if fill_holes:
-        bpy.ops.mesh.select_all(action='DESELECT')
-        bpy.ops.mesh.select_non_manifold()
-        try:
-            bpy.ops.mesh.fill()
-        except RuntimeError:
-            pass  # Some non-manifold edges can't be filled
-
-    bpy.ops.object.mode_set(mode='OBJECT')
+    try:
+        bpy.ops.mesh.select_all(action='SELECT')
+        bpy.ops.mesh.delete_loose(use_verts=True, use_edges=True, use_faces=False)
+        if remove_doubles:
+            bpy.ops.mesh.merge_by_distance(distance=merge_distance)
+        if fix_normals:
+            bpy.ops.mesh.normals_make_consistent(inside=False)
+        if fill_holes:
+            bpy.ops.mesh.select_all(action='DESELECT')
+            bpy.ops.mesh.select_non_manifold()
+            try:
+                bpy.ops.mesh.fill()
+            except RuntimeError:
+                pass
+    finally:
+        bpy.ops.object.mode_set(mode='OBJECT')
```

🧰 Tools
🪛 Ruff (0.15.1)
[error] 95-95: Boolean positional value in function call
(FBT003)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@data/blender-scripts/import_neural_mesh.py` around lines 94 - 120, The block
that sets bpy.ops.object.mode_set(mode='EDIT') (and the similar sequence in
auto_uv_neural_mesh) must be protected with a try/finally so the mode is always
restored to 'OBJECT' even if bpy.ops.mesh.delete_loose,
bpy.ops.mesh.remove_doubles (merge_by_distance),
bpy.ops.mesh.normals_make_consistent, bpy.ops.mesh.select_non_manifold, or other
mesh operators raise an exception; modify the code around the EDIT-mode section
(the sequence starting with bpy.context.view_layer.objects.active = obj /
obj.select_set(True) / bpy.ops.object.mode_set(mode='EDIT')) to perform the mesh
operations inside a try block and call bpy.ops.object.mode_set(mode='OBJECT') in
a finally block (also ensure any temporary selections are cleaned up), and apply
the same try/finally guard to the auto_uv_neural_mesh routine to avoid leaving
objects stranded in EDIT mode.
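The try/finally guarantee can be demonstrated without Blender. In this minimal sketch, a hand-rolled `FakeModeAPI` class stands in for `bpy.ops.object.mode_set`, and a deliberately raised error shows the mode still being restored:

```python
class FakeModeAPI:
    """Stand-in for bpy.ops.object.mode_set (bpy is unavailable outside Blender)."""
    def __init__(self):
        self.mode = 'OBJECT'
    def mode_set(self, mode):
        self.mode = mode

ops = FakeModeAPI()

def cleanup(ops, fail=False):
    ops.mode_set(mode='EDIT')
    try:
        if fail:
            raise RuntimeError("operator failed mid-edit")
        # ... mesh operators would run here ...
    finally:
        ops.mode_set(mode='OBJECT')  # always restored, even on error

try:
    cleanup(ops, fail=True)
except RuntimeError:
    pass

print(ops.mode)  # back to 'OBJECT' despite the mid-edit exception
```

Note the exception still propagates to the caller; the finally block only guarantees the mode is restored first.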
```python
    # Ground to floor (bottom of bounding box at z=0)
    if ground_to_floor:
        bbox_min_z = min(v.co.z for v in obj.data.vertices)
        obj.location.z -= bbox_min_z
```
Python vertex loop for floor grounding is O(n) on meshes with 100k–500k faces
The script's own docstring calls out 100k–500k+ face counts. Iterating obj.data.vertices in Python at that scale is noticeably slow. obj.bound_box gives the pre-computed 8-corner bounding box in local space — use min(v[2] for v in obj.bound_box) for the same result at O(1) cost.
⚡ Proposed fix

```diff
-        bbox_min_z = min(v.co.z for v in obj.data.vertices)
+        bbox_min_z = min(v[2] for v in obj.bound_box)
         obj.location.z -= bbox_min_z
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@data/blender-scripts/import_neural_mesh.py` around lines 155 - 158, Replace
the O(n) vertex loop that computes bbox_min_z by iterating obj.data.vertices
with a constant-time read of the precomputed bounding box: when ground_to_floor
is true, compute bbox_min_z = min(c[2] for c in obj.bound_box) (or equivalent)
and then adjust obj.location.z -= bbox_min_z; update references to
obj.data.vertices and bbox_min_z accordingly to avoid iterating large meshes.
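The constant-time version reads only 8 corner triples instead of every vertex. A minimal sketch, with a hand-built list standing in for Blender's `obj.bound_box` (8 local-space `(x, y, z)` corners):

```python
# Stand-in for obj.bound_box: 8 corners of a 2x2 box spanning z=0.25..2.0.
bound_box = [
    (-1.0, -1.0, 0.25), (-1.0, -1.0, 2.0), (-1.0, 1.0, 2.0), (-1.0, 1.0, 0.25),
    ( 1.0, -1.0, 0.25), ( 1.0, -1.0, 2.0), ( 1.0, 1.0, 2.0), ( 1.0, 1.0, 0.25),
]

bbox_min_z = min(c[2] for c in bound_box)  # 8 comparisons, regardless of face count
location_z = 1.5                           # stand-in for obj.location.z
location_z -= bbox_min_z                   # bottom of the box now sits relative to z=0

print(bbox_min_z, location_z)
```

One caveat worth noting: `bound_box` is in local space, so on an object with non-identity scale or rotation the result differs from the world-space vertex minimum; for freshly imported neural meshes with default transforms the two agree.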
```python
    bpy.context.view_layer.objects.active = obj
    obj.select_set(True)
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')

    if method == 'SMART':
        bpy.ops.uv.smart_project(
            angle_limit=math.radians(66),
            island_margin=0.02,
            area_weight=0.0,
            scale_to_bounds=True
        )
    elif method == 'LIGHTMAP':
        bpy.ops.uv.lightmap_pack(
            PREF_CONTEXT='ALL_FACES',
            PREF_PACK_IN_ONE=True,
            PREF_BOX_DIV=12,
            PREF_MARGIN_DIV=0.2
        )

    bpy.ops.object.mode_set(mode='OBJECT')
```
No try/finally guard — exception during UV unwrap leaves object in EDIT mode
Same issue as cleanup_neural_mesh: mode_set(mode='EDIT') at line 208 with no finally block, so any RuntimeError from smart_project or lightmap_pack traps the object in EDIT mode.
🛡️ Proposed fix

```diff
     bpy.context.view_layer.objects.active = obj
     obj.select_set(True)
     bpy.ops.object.mode_set(mode='EDIT')
-    bpy.ops.mesh.select_all(action='SELECT')
-
-    if method == 'SMART':
-        bpy.ops.uv.smart_project(...)
-    elif method == 'LIGHTMAP':
-        bpy.ops.uv.lightmap_pack(...)
-
-    bpy.ops.object.mode_set(mode='OBJECT')
+    try:
+        bpy.ops.mesh.select_all(action='SELECT')
+        if method == 'SMART':
+            bpy.ops.uv.smart_project(
+                angle_limit=math.radians(66),
+                island_margin=0.02,
+                area_weight=0.0,
+                scale_to_bounds=True
+            )
+        elif method == 'LIGHTMAP':
+            bpy.ops.uv.lightmap_pack(
+                PREF_CONTEXT='ALL_FACES',
+                PREF_PACK_IN_ONE=True,
+                PREF_BOX_DIV=12,
+                PREF_MARGIN_DIV=0.2
+            )
+        else:
+            print(f"[auto_uv_neural_mesh] Unknown UV method '{method}', skipping.")
+    finally:
+        bpy.ops.object.mode_set(mode='OBJECT')
```

🧰 Tools
🪛 Ruff (0.15.1)
[error] 207-207: Boolean positional value in function call
(FBT003)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@data/blender-scripts/import_neural_mesh.py` around lines 206 - 226, The
UV-unwrapping code sets the object to EDIT mode
(bpy.ops.object.mode_set(mode='EDIT')) then calls bpy.ops.uv.smart_project /
bpy.ops.uv.lightmap_pack without a finally guard, so an exception leaves the
object stuck in EDIT mode; wrap the mode switch and the conditional unwrap calls
in a try/finally (or context manager) so that
bpy.ops.object.mode_set(mode='OBJECT') is always executed in the finally block
(also ensure obj.select_set(True) and bpy.context.view_layer.objects.active =
obj remain inside the try), referencing the existing bpy.ops.object.mode_set,
bpy.ops.uv.smart_project, and bpy.ops.uv.lightmap_pack symbols to locate where
to add the try/finally.
```diff
             f"https://queue.fal.run/fal-ai/hyper3d/requests/{request_id}/status",
             headers={
-                "Authorization": f"Key {bpy.context.scene.blendermcp_hyper3d_api_key}",
+                "Authorization": f"KEY {bpy.context.scene.blendermcp_hyper3d_api_key}",
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat public/downloads/modelforge-addon.py | sed -n '1320,1330p'
```

Repository: Ker102/ModelForge
Length of output: 537

🏁 Script executed:

```shell
cat public/downloads/modelforge-addon.py | sed -n '1360,1375p'
```

Repository: Ker102/ModelForge
Length of output: 605

🏁 Script executed:

```shell
cat public/downloads/modelforge-addon.py | sed -n '1510,1530p'
```

Repository: Ker102/ModelForge
Length of output: 794
🌐 Web query:

```
fal.ai API authentication Authorization header format documentation
```

💡 Result:
fal.ai's docs specify API-key auth via the Authorization header using the Key prefix:
- Header format: `Authorization: Key YOUR_API_KEY` (docs.fal.ai)
- Admin-scope endpoints use the same format (just an Admin-scope key): `Authorization: Key YOUR_ADMIN_API_KEY` (docs.fal.ai)

Citations:
- 1: https://docs.fal.ai/platform-apis/authentication?utm_source=openai
- 2: https://docs.fal.ai/platform-apis/authentication?utm_source=openai
Fix authorization header prefix casing at line 1370: "KEY" should be "Key".
The poll_rodin_job_status_fal_ai method uses "KEY {api_key}" while create_rodin_job_fal_ai (line 1330) and import_generated_asset_fal_ai (line 1523) both use "Key {api_key}". fal.ai's API documentation specifies the exact format as Authorization: Key YOUR_API_KEY (case-sensitive), so this inconsistency will cause 401 authentication failures.
🐛 Proposed fix

```diff
-                "Authorization": f"KEY {bpy.context.scene.blendermcp_hyper3d_api_key}",
+                "Authorization": f"Key {bpy.context.scene.blendermcp_hyper3d_api_key}",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@public/downloads/modelforge-addon.py` at line 1370, The Authorization header
in poll_rodin_job_status_fal_ai is using the wrong prefix casing ("KEY") causing
401s; update the header construction to use "Key
{bpy.context.scene.blendermcp_hyper3d_api_key}" to match create_rodin_job_fal_ai
and import_generated_asset_fal_ai and fal.ai's docs, ensuring the Authorization
value is case-sensitive "Key" instead of "KEY".
```python
    if enabled and api_key:
        return {"enabled": True, "message": "Sketchfab integration is enabled and ready to use."}
    elif enabled and not api_key:
```
Dead code — if enabled and api_key is unreachable.
When api_key is truthy, the function always returns from the if api_key: block (lines 1584–1617). Lines 1619–1620 can never execute.
Proposed cleanup

```diff
-    if enabled and api_key:
-        return {"enabled": True, "message": "Sketchfab integration is enabled and ready to use."}
-    elif enabled and not api_key:
+    if enabled and not api_key:
         return {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@public/downloads/modelforge-addon.py` around lines 1619 - 1621, The branch
"if enabled and api_key" is dead because the earlier "if api_key:" block always
returns when api_key is truthy; remove the unreachable condition (or merge its
intent) and simplify the function by keeping the single "if api_key:" handling
and the subsequent "elif enabled and not api_key" branch, or move the
enabled/api_key checks into a single conditional that returns the correct dict;
look for the function containing the "if api_key:" block and the variables
enabled and api_key and remove or refactor the redundant "if enabled and
api_key" branch.
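A minimal sketch of the refactored control flow described above, using a hypothetical `sketchfab_status` helper and illustrative messages (the real function's name and return shapes may differ): once the truthy `api_key` case returns early, only two branches remain and neither is dead.

```python
def sketchfab_status(enabled, api_key):
    """Hypothetical reconstruction: early return makes every branch reachable."""
    if api_key:
        return {"enabled": True, "message": "Sketchfab integration is enabled and ready to use."}
    if enabled:  # enabled but no key configured
        return {"enabled": False, "message": "Sketchfab is enabled but no API key is set."}
    return {"enabled": False, "message": "Sketchfab integration is disabled."}

print(sketchfab_status(True, "abc123")["enabled"])  # key present → ready
print(sketchfab_status(True, None)["message"])      # enabled, missing key
print(sketchfab_status(False, None)["message"])     # disabled
```

Each input combination now maps to exactly one reachable return, which is what removing the `if enabled and api_key` branch achieves in the addon.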
```typescript
import http from "http"

const PORT = Number(process.env.MOCK_NEURAL_PORT || 8090)
```
Number() on an invalid env-var string silently produces NaN.
Number("invalid") → NaN; server.listen(NaN) is undefined behaviour in Node.js — in practice it may bind to an OS-assigned port (port 0 semantics) with no warning, silently ignoring MOCK_NEURAL_PORT.
🛡️ Proposed fix with validation

```diff
-const PORT = Number(process.env.MOCK_NEURAL_PORT || 8090)
+const rawPort = process.env.MOCK_NEURAL_PORT
+const PORT = rawPort ? parseInt(rawPort, 10) : 8090
+if (isNaN(PORT) || PORT < 1 || PORT > 65535) {
+  console.error(`[mock-neural] Invalid MOCK_NEURAL_PORT: "${rawPort}"`)
+  process.exit(1)
+}
```
+}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const PORT = Number(process.env.MOCK_NEURAL_PORT || 8090) | |
| const rawPort = process.env.MOCK_NEURAL_PORT | |
| const PORT = rawPort ? parseInt(rawPort, 10) : 8090 | |
| if (isNaN(PORT) || PORT < 1 || PORT > 65535) { | |
| console.error(`[mock-neural] Invalid MOCK_NEURAL_PORT: "${rawPort}"`) | |
| process.exit(1) | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/mock-neural-server.ts` at line 15, The code uses
Number(process.env.MOCK_NEURAL_PORT || 8090) which yields NaN for invalid
strings and can cause server.listen(NaN) to behave unpredictably; change the
logic that computes the PORT constant (referencing process.env.MOCK_NEURAL_PORT
and PORT) to explicitly parse and validate the env var (e.g. parseInt or Number
then isFinite check), log or throw on invalid values, and fallback to the
default 8090 only when the parsed value is a valid positive integer before
calling server.listen so the server never receives NaN.
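The same parse-validate-fallback rule is language-agnostic. A minimal Python sketch (the function name is illustrative, not from the repo): fall back to the default only when the variable is unset, and fail loudly on anything unparsable or out of range.

```python
def resolve_port(env_value, default=8090):
    """Parse and validate a port from an env var; default only when unset."""
    if env_value is None or env_value == "":
        return default
    try:
        port = int(env_value)
    except ValueError:
        raise SystemExit(f"Invalid MOCK_NEURAL_PORT: {env_value!r}")
    if not 1 <= port <= 65535:
        raise SystemExit(f"Port out of range: {port}")
    return port

print(resolve_port(None))    # unset → default
print(resolve_port("8091"))  # valid string → parsed integer
```

The key difference from the original `Number(... || 8090)` pattern is that a set-but-invalid value becomes a hard error instead of silently degrading to undefined behaviour.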
```typescript
  const chunks: Buffer[] = []
  req.on("data", (chunk: Buffer) => chunks.push(chunk))
  req.on("end", () => {
    const body = Buffer.concat(chunks).toString("utf-8")
    console.log(`[mock-neural] /generate request received (${body.length} bytes)`)

    // Simulate ~1s processing time
    setTimeout(() => {
      res.writeHead(200, {
        "Content-Type": "model/gltf-binary",
        "Content-Length": String(MOCK_GLB.length),
      })
      res.end(MOCK_GLB)
      console.log(`[mock-neural] Returned ${MOCK_GLB.length} byte GLB`)
    }, 1000)
  })
  return
```
Missing req.on("error") handler — aborted connections crash the process.
In Node.js, an unhandled error event on an EventEmitter throws synchronously, terminating the process. If the client disconnects before the request body is fully received, the IncomingMessage stream emits error (typically ECONNRESET). This is especially likely during automated tests that don't always drain the response.
🛡️ Proposed fix

```diff
   const chunks: Buffer[] = []
   req.on("data", (chunk: Buffer) => chunks.push(chunk))
+  req.on("error", (err) => {
+    console.error(`[mock-neural] Request error: ${err.message}`)
+    // Response may already be gone — attempt a 500 if headers not yet sent
+    if (!res.headersSent) {
+      res.writeHead(500, { "Content-Type": "application/json" })
+      res.end(JSON.stringify({ error: "Request error" }))
+    }
+  })
   req.on("end", () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/mock-neural-server.ts` around lines 149 - 165, The request handler
that collects body chunks (the block using const chunks: Buffer[] and
req.on("data"/"end")) lacks an error listener, so aborted connections can emit
an unhandled 'error' on req and crash the process; add a req.on("error", (err)
=> { /* log and early return/cleanup */ }) next to the existing listeners to log
the error (include err.message), stop further processing, and ensure you don't
call res.end(MOCK_GLB) when the request has errored or been aborted; optionally
also attach a res.on("error") to catch write errors when sending MOCK_GLB.
```typescript
server.listen(PORT, () => {
  console.log(`\n🧠 Mock Neural Server running on http://localhost:${PORT}`)
  console.log(`  GET  /health   → health check`)
  console.log(`  POST /generate → returns minimal GLB cube mesh\n`)
})
```
Missing server.on("error") handler — EADDRINUSE crashes the process.
If PORT is already bound, Node.js emits an error event on the server. Without a listener, this is an uncaught exception that terminates the process immediately, with no user-friendly message.
🛡️ Proposed fix

```diff
 server.listen(PORT, () => {
   console.log(`\n🧠 Mock Neural Server running on http://localhost:${PORT}`)
   console.log(`  GET  /health   → health check`)
   console.log(`  POST /generate → returns minimal GLB cube mesh\n`)
 })
+
+server.on("error", (err: NodeJS.ErrnoException) => {
+  if (err.code === "EADDRINUSE") {
+    console.error(`[mock-neural] Port ${PORT} is already in use. Set MOCK_NEURAL_PORT to a different port.`)
+  } else {
+    console.error(`[mock-neural] Server error: ${err.message}`)
+  }
+  process.exit(1)
+})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/mock-neural-server.ts` around lines 173 - 177, The server currently
calls server.listen(PORT, ...) without an error listener, so an EADDRINUSE will
crash the process; add server.on("error", (err) => { if (err && err.code ===
"EADDRINUSE") log a clear message including PORT and exit non‑zero
(process.exit(1)); otherwise log the error and rethrow or exit } ) adjacent to
the existing server.listen call to gracefully handle port-in-use and other
server errors; reference the existing server variable, PORT constant, and the
server.listen callback when adding this handler.
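The EADDRINUSE condition itself is easy to reproduce with stdlib sockets — a minimal Python sketch (platform-dependent: this is the typical POSIX behaviour when `SO_REUSEADDR` is not set) showing the error the server's handler should catch:

```python
import errno
import socket

# Bind once to an OS-assigned free port, then try to bind the same port again.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # port 0 → OS picks a free port
port = first.getsockname()[1]

in_use = False
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # same port → OSError(EADDRINUSE)
except OSError as exc:
    in_use = exc.errno == errno.EADDRINUSE
    print(f"Port {port} already in use: {in_use}")
finally:
    second.close()
    first.close()
```

This is the same errno Node.js surfaces as `err.code === "EADDRINUSE"` on the server's `error` event; without a listener for it, the process dies with an uncaught exception instead of the friendly message.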
Summary
Blender 5.x API compatibility fixes and planner edit awareness improvements from the 2026-02-21 session.
Changes
Blender 5.x API Fixes
- `prompts.ts`: Fixed `ALPHA_BLEND` → `BLEND`, strengthened `shadow_method`/`shadow_mode` AVOID rules with explicit valid values
- `api_version_compatibility.py`: Added 3 new sections (19-21): `blend_method` valid values, EEVEE removed properties, `create_transparent_material()` pattern
- `eevee_setup.py`: Rewrote for 5.x — removed `use_ssr`, `use_gtao`, `use_bloom`, `taa_render_samples`. Replaced bloom with compositor Glare node
- `toon_setup.py`: Added `use_nodes = True`, removed `use_ssr`/`shadow_cascade_size`
Planner Edit Awareness
- `executor.ts`: Captures `get_scene_info`/`get_all_object_info` structured data (name, type, location, dimensions) and injects it into every `generateCode()` call as `## Current Scene Objects`
- `route.ts`: Scene snapshot now returns structured JSON instead of a formatted string. Object cap increased 12 → 30
- `prompts.ts`: Edit rule 5 mandates referencing existing objects by exact name + coordinates, never recreating them
Other
- `tsc --noEmit` passes with 0 errors
Summary by CodeRabbit
Release Notes
New Features
Improvements
Documentation