
feat: Blender 5.x API fixes + planner edit awareness#20

Merged
Ker102 merged 31 commits into main from feat/blender-5x-api-fixes-and-edit-awareness
Feb 21, 2026
Conversation

Ker102 (Owner) commented Feb 20, 2026

Summary

Blender 5.x API compatibility fixes and planner edit awareness improvements from the 2026-02-21 session.

Changes

Blender 5.x API Fixes

  • prompts.ts: Fixed ALPHA_BLEND → BLEND, strengthened shadow_method/shadow_mode AVOID rules with explicit valid values
  • api_version_compatibility.py: Added 3 new sections (19-21): blend_method valid values, EEVEE removed properties, create_transparent_material() pattern
  • eevee_setup.py: Rewrote for 5.x — removed use_ssr, use_gtao, use_bloom, taa_render_samples. Replaced bloom with compositor Glare node
  • toon_setup.py: Added use_nodes = True, removed use_ssr/shadow_cascade_size

Planner Edit Awareness

  • executor.ts: Captures get_scene_info/get_all_object_info structured data (name, type, location, dimensions) and injects into every generateCode() call as ## Current Scene Objects
  • route.ts: Scene snapshot now returns structured JSON instead of formatted string. Object cap increased 12 → 30
  • prompts.ts: Edit rule 5 mandates referencing existing objects by exact name + coordinates, never recreating
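The executor-side context injection above can be sketched as follows. This is a hypothetical Python rendering of logic that actually lives in executor.ts (TypeScript); the helper name, dict shape, and formatting are illustrative, though the field names mirror what get_all_object_info returns.

```python
def format_scene_context(objects, cap=30):
    """Render structured scene data as the markdown block injected into
    every generateCode() call. `objects` is a list of dicts with
    name/type/location/dimensions keys. Hypothetical helper -- the real
    implementation is in executor.ts."""
    lines = ["## Current Scene Objects"]
    for obj in objects[:cap]:  # object cap raised from 12 to 30 in this PR
        loc = ", ".join(f"{v:.2f}" for v in obj["location"])
        dim = ", ".join(f"{v:.2f}" for v in obj["dimensions"])
        lines.append(f"- {obj['name']} ({obj['type']}) at ({loc}), size ({dim})")
    return "\n".join(lines)
```

Giving the planner exact names and coordinates is what lets edit rule 5 demand "reference existing objects, never recreate them".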

Other

  • Stream timeout increased 60s → 180s for deep thinking models
  • Re-ingested all 135 scripts into pgvector
  • tsc --noEmit passes with 0 errors

Summary by CodeRabbit

Release Notes

  • New Features

    • Added guided workflow interface for step-by-step 3D generation tasks.
    • Introduced neural 3D generation with multiple providers (Hunyuan, TRELLIS, YVO3D).
    • Implemented strategy classification system for procedural, neural, and hybrid generation approaches.
    • Added comprehensive 3D production pipeline tools: retopology, rigging, UV mapping, animation, and PBR materials.
    • Introduced hybrid pipeline orchestrator combining multiple generation stages.
    • Added visual feedback loop for real-time scene analysis and corrections.
  • Improvements

    • Enhanced error handling with automatic retry capabilities.
    • Updated Blender 5.x API compatibility across all scripts.
  • Documentation

    • Added comprehensive guides for 3D pipeline strategy, integration architecture, and production workflows.

- Created scripts/generate-training-data.ts (parses 125 RAG scripts into instruction-output pairs)
- Generated 269 training pairs (125 full-script + 144 function-level) in training/training_data.jsonl
- Created training/eval_prompts.json (50 held-out test prompts across all categories)
- Created training/train_blender_codegen.py (QLoRA 4-bit NF4 for Azure A100, targets Qwen3-8B)
- New RAG script: displacement_textures.py (raked sand, water ripples, rocky terrain, modifier displacement)
- Re-ingested 125 scripts into pgvector
- New: hdri_lighting.py (HDRI environment, gradient sky, solid color world)
- New: photorealistic_materials.py (image-based PBR + procedural stone/wood)
- Updated: blender_api_pitfalls.py (+pitfall 16: noise_scale, +pitfall 17: camera X rotation)
- Re-ingested 127 scripts into pgvector
- Regenerated training data: 281 pairs (up from 269)
- New: interior_rooms.py (create_room, wall_with_doorway, wall_with_window)
- Real-world scale reference table for interiors
- Re-ingested 128 scripts into pgvector
- Training data regenerated (286 pairs)
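The instruction-output pair generation described above can be sketched in miniature. The real tool is scripts/generate-training-data.ts and also emits function-level pairs; this Python sketch only shows the full-script case, with the docstring-as-instruction convention assumed rather than confirmed.

```python
import json

def script_to_pair(source: str) -> str:
    """Turn one Blender script into a JSONL training line: the module
    docstring becomes the instruction, the remaining code the output.
    Simplified sketch of scripts/generate-training-data.ts."""
    head, sep, body = source.partition('"""')
    if sep:
        doc, _, code = body.partition('"""')
        instruction = doc.strip()
    else:
        instruction, code = "Write a Blender Python script.", source
    return json.dumps({"instruction": instruction, "output": code.strip()})
```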
…back loop

- Import suggestImprovements from lib/ai/vision.ts into executor
- Insert 140-line correction loop between viewport switch (step 4) and audit (step 5)
- Capture viewport screenshot -> Gemini Vision analyzes -> generate correction code
- Only correct high-priority issues (agent decides based on confidence)
- Max 2 iterations, entirely non-fatal on any error
- Add AgentVisualAnalysis and AgentVisualCorrection stream event types
- Enable enableVisualFeedback: true in chat route by default
…erHub

- Scraped 5 BlenderHub pages (top 50, free, geo nodes, animation, rendering)
- Cataloged built-in addons (zero install), free community addons, paid addons
- Documented Python API access patterns (bpy.ops.addon_name)
- Added 3-phase integration strategy (built-in → free → paid)
- Sources: BlenderHub, DuetPBR
- auto_retopology.py: Voxel/Quadriflow remesh, decimation, mesh repair
- auto_rigify.py: Rigify metarig templates, rig generation, auto weights
- auto_uv_unwrap.py: Shape-based auto UV, lightmap, texel density
- procedural_animation.py: Orbit, wave, pendulum, spring, NLA, dolly zoom
- pbr_texture_loader.py: PBR map loader, folder discovery, baking
- model_export.py: LOD chain, Game/VFX/Web/Print presets, USD, validation
- prompts.ts: Added PRODUCTION PIPELINE hints to CODE_GENERATION_PROMPT
- Re-ingested 134 scripts into pgvector
New lib/neural/ module with 5 provider clients:
- Hunyuan Shape 2.1 (geometry, self-hosted, 10GB VRAM)
- Hunyuan Paint 2.1 (PBR texturing, self-hosted, 21GB VRAM)
- Hunyuan Part (mesh segmentation, Gradio/HF)
- TRELLIS 2 (Microsoft, MIT, geometry+PBR, 24GB VRAM)
- YVO3D (premium texturing API, up to ULTIMA 8K)

Core: types.ts, base-client.ts, registry.ts, index.ts
Hybrid pipeline: 8-stage orchestrator (neural→Blender→export)
RAG: import_neural_mesh.py (import, cleanup, normalize, decimate, UV, PBR)
Prompts: neural vs procedural decision rules
Stream events: AgentNeuralGeneration, AgentHybridPipeline
…ural/hybrid

- New strategy-types.ts: Strategy, StrategyDecision, StrategyOverride types
- New strategy-router.ts: Two-phase classification (keyword patterns + LLM fallback)
  - 7 procedural, 5 neural, 4 hybrid regex pattern groups
  - Gemini JSON-mode LLM fallback for ambiguous requests
  - User override support for manual strategy selection
- types.ts: AgentStrategyClassification stream event, strategyDecision in PlanningMetadata
- planner.ts: Injects neural context when strategy is neural/hybrid
- executor.ts: Accepts strategyDecision in ExecutionOptions
- route.ts: Classifies between scene snapshot and planning, emits strategy event
- tsc --noEmit: 0 errors
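The two-phase classification can be sketched like this. The pattern groups below are invented for illustration — strategy-router.ts has 7 procedural, 5 neural, and 4 hybrid groups, and its LLM fallback uses Gemini JSON mode (phase 2, not shown here).

```python
import re

# Illustrative pattern groups; the real router's regexes differ.
PATTERNS = {
    "neural": [r"\bphotoreal\w*\b", r"\bfrom (?:an? )?image\b"],
    "hybrid": [r"\bgenerate .* then rig\b"],
    "procedural": [r"\bcube\b|\bsphere\b", r"\barray of\b"],
}

def classify(request: str):
    """Phase 1: keyword patterns. Returning None signals an ambiguous
    request, which is the cue to fall back to the LLM classifier."""
    text = request.lower()
    for strategy, patterns in PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return strategy
    return None  # ambiguous -> phase 2 (LLM fallback)
```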
- New workflow-types.ts: WorkflowStep, WorkflowProposal, step actions/statuses
- New workflow-advisor.ts: LLM per-step tool recommendations + static fallback
- New workflow-step/route.ts: per-step execution API (neural/blender/skip/manual)
- New workflow-panel.tsx: step cards, Execute/Manual/Skip buttons, progress bar
- New mock-neural-server.ts: returns valid minimal GLB for testing
- Chat route: neural/hybrid requests generate WorkflowProposal instead of auto-executing
- Stream events: AgentWorkflowProposal, AgentWorkflowStepUpdate
- project-chat.tsx: handles workflow_proposal event, renders WorkflowPanel
- TypeScript: tsc --noEmit passes with 0 errors
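A "valid minimal GLB" like the one mock-neural-server.ts returns can be built in a few lines. This is a sketch of the idea, not the server's actual payload: per the glTF 2.0 binary container layout, a GLB is a 12-byte header (magic, version, total length) followed by a 4-byte-aligned JSON chunk.

```python
import json
import struct

def minimal_glb() -> bytes:
    """Build a minimal valid GLB (glTF 2.0 binary) with an empty scene,
    the kind of placeholder a mock neural server could return for tests."""
    gltf = {"asset": {"version": "2.0"}, "scenes": [{"nodes": []}], "scene": 0}
    payload = json.dumps(gltf, separators=(",", ":")).encode("utf-8")
    payload += b" " * (-len(payload) % 4)  # pad JSON chunk to 4-byte alignment
    header = struct.pack("<4sII", b"glTF", 2, 12 + 8 + len(payload))
    chunk = struct.pack("<II", len(payload), 0x4E4F534A) + payload  # 'JSON' chunk
    return header + chunk
```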
…ender 4.x

The prompt previously told the LLM to SKIP use_nodes because it was
'deprecated in Blender 5.x'. On Blender 4.x, this causes mat.node_tree
to be None, crashing every material creation step.

Fixed: pattern now always calls mat.use_nodes = True (harmless no-op on
5.x, essential on 4.x). Removed the incorrect AVOID entry.
…gging

1. Step result log entries now include logType: 'execute' instead of undefined
2. Screenshot extraction tries 3 data paths (result.image, top-level, double-nested)
3. Logs full response shape when screenshot capture fails for easier debugging
- AbortController with 60s stream stall timeout detection
- Classify errors as retryable (network/timeout/abort) vs non-retryable
- Keep partial progress on retryable errors instead of removing assistant msg
- handleRetry() re-sends the saved payload
- Retry button in error display with destructive outline styling
- Clear retry state in handleStartNew
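The retryable/non-retryable split above can be sketched as message-based triage. This is a hypothetical Python rendering of the TypeScript logic in project-chat.tsx; the marker list is illustrative, not the component's actual list.

```python
RETRYABLE_MARKERS = ("timeout", "network", "abort", "econnreset", "stall")

def is_retryable(error: Exception) -> bool:
    """Transient transport failures keep partial progress and get a
    Retry button; everything else is treated as fatal. Sketch only."""
    msg = str(error).lower()
    return any(marker in msg for marker in RETRYABLE_MARKERS)
```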
1. Remove width/height/format from get_viewport_screenshot — addon
   returned 'unexpected keyword argument width' error
2. Add planning rule 10: never delete existing lights when editing
   a scene unless user explicitly asks — prevents black rendered view
3. Fix in both executor.ts and screenshot.ts
- Trim BLENDER_SYSTEM_PROMPT to essential identity (target: Blender 5.0+)
- Planning rule 5: distinguish NEW SCENE vs EDIT SCENE flows
- Blender API section: target 5.x only, remove version compat noise
- Remove dead BLENDER_FEW_SHOT_EXAMPLES (never imported, RAG replaces)
- Condense boolean operations section
- Fix screenshot params (no width/height/format — addon rejects them)
- Add planning rule 10: preserve lights when editing scenes
Shrine test step 6 crashed on 'Subsurface Color' which doesn't exist
in Blender 5.x. Added all 5 renamed/removed sockets to the AVOID
section with explicit 'will crash' warning for stronger LLM signal.
1. Addon get_viewport_screenshot: when no filepath given, now uses
   temp file, captures viewport, encodes as base64, returns inline.
   Root cause: 'if not filepath: return {error: No filepath}' — we
   never sent a filepath.
2. Added shadow_method/shadow_mode to AVOID section — fog layer step
   crashed 3x in edit shrine test on these non-existent attributes.
3. Synced addon to public/downloads.
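The temp-file-to-base64 pattern from fix 1 can be shown without bpy. Here `render_to_file` stands in for the addon's actual viewport capture call; the structure (temp file when no filepath is given, inline base64 return, cleanup in `finally`) is the point.

```python
import base64
import os
import tempfile

def capture_inline(render_to_file, format="png"):
    """When the caller supplies no filepath: render to a temp file,
    return the image inline as base64, and always clean up."""
    tmp = tempfile.NamedTemporaryFile(suffix=f".{format}", delete=False)
    filepath = tmp.name
    tmp.close()
    try:
        render_to_file(filepath)  # stand-in for the bpy screenshot call
        with open(filepath, "rb") as fh:
            data = base64.b64encode(fh.read()).decode("ascii")
        return {"image": data, "format": format}
    finally:
        try:
            os.remove(filepath)
        except OSError:
            pass
```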
1. Stream stall timeout: 60s -> 180s (deep thinking models need more)
2. Rewrote eevee_setup.py for Blender 5.x — removed use_ssr, use_gtao,
   use_bloom, taa_render_samples, shadow_cascade_size; replaced bloom
   with compositor Glare node approach
3. Fixed toon_setup.py: added use_nodes=True, removed deprecated attrs
4. Added EEVEE SSR attributes to AVOID section in prompts.ts
5. Re-ingested all scripts into pgvector
1. Added planning rule 11: OBJECT GROUNDING — objects on floor Z=0,
   wall-mounted objects flush with wall surface, explicit coordinates
2. Added planning rule 12: LIGHTING ENERGY — minimum 1000W point
   lights for dark indoor scenes, 500W+ fill lights
3. Increased code gen light energy guidance (500W -> 1000W for key)
4. Fixed missing 'import os' in addon get_viewport_screenshot
…cript

Added sections 19-21 to api_version_compatibility.py:
- 19: blend_method valid values (OPAQUE/CLIP/HASHED/BLEND), shadow_method removed
- 20: All removed EEVEE 5.x properties (SSR, bloom, GTAO, shadow_cascade)
- 21: create_transparent_material() helper function
Re-ingested all scripts into pgvector.
…ON, stronger rules

1. executor.ts: Capture get_scene_info/get_all_object_info results and
   inject structured object list (name, type, location, dimensions) into
   every generateCode() call as 'Current Scene Objects' context
2. route.ts: Scene snapshot now returns structured JSON instead of
   formatted string. Object cap increased 12 -> 30.
3. prompts.ts: Edit rule now mandates referencing existing objects by
   exact name + coordinates, never recreating objects that exist
github-actions bot added the documentation, dependencies, configuration, frontend, backend, desktop, and scripts labels on Feb 20, 2026
coderabbitai bot commented Feb 20, 2026

Caution

Review failed

Failed to post review comments

📝 Walkthrough

This PR introduces a comprehensive 3D generation pipeline with neural provider integration, strategy-based routing, and guided workflow orchestration. It adds neural client implementations (Hunyuan Shape/Paint/Part, TRELLIS, YVO3D), hybrid pipeline orchestration, strategy classification, workflow proposals, new Blender processing scripts (retopology, rigging, UV unwrap, animation, materials, export), chat route enhancements with streaming and retry logic, and a training/fine-tuning pipeline for code generation.

Changes

Cohort / File(s) Summary
Strategy Classification & Routing
lib/orchestration/strategy-types.ts, lib/orchestration/strategy-router.ts
New strategy classification system that categorizes requests as procedural, neural, or hybrid using keyword patterns with optional LLM fallback. Routes user requests to appropriate generation approaches based on confidence-weighted scoring.
Neural 3D Generation Framework
lib/neural/types.ts, lib/neural/base-client.ts, lib/neural/index.ts, lib/neural/registry.ts
Core abstraction and registry for neural 3D providers. Defines request/result contracts, pipeline stages, provider metadata, and utilities for model downloading, buffer handling, image conversion, and polling.
Neural Provider Implementations
lib/neural/providers/hunyuan-shape.ts, lib/neural/providers/hunyuan-paint.ts, lib/neural/providers/hunyuan-part.ts, lib/neural/providers/trellis.ts, lib/neural/providers/yvo3d.ts, lib/neural/gradio-client.d.ts
Five concrete neural 3D generation clients (Hunyuan Shape for geometry, Hunyuan Paint for texturing, Hunyuan Part for segmentation, TRELLIS for image-to-3D, YVO3D for texture generation). Includes Gradio client type definitions for optional Gradio-based providers.
Hybrid Pipeline Orchestration
lib/neural/hybrid-pipeline.ts
End-to-end orchestrator chaining geometry generation, texturing, Blender import, retopology, segmentation, rigging, animation, and export with stage-by-stage progress tracking and graceful degradation support.
Chat Route Enhancement
app/api/ai/chat/route.ts
Refactored chat endpoint to support strategy classification, workflow proposal generation, structured JSON scene snapshots, neural/hybrid workflow branching, and enhanced planning metadata with scene context and strategy decisions.
Workflow Step API
app/api/ai/workflow-step/route.ts
New POST endpoint handling workflow step actions (execute, skip, manual_done) with support for neural execution, Blender agent tasks, and manual steps. Routes to appropriate execution backend based on recommendedTool.
Workflow Orchestration & Planning
lib/orchestration/workflow-types.ts, lib/orchestration/workflow-advisor.ts, lib/orchestration/planner.ts
Workflow proposal generation and step definitions. Advisor recommends multi-step workflows with tool assignments and fallback deterministic templates. Planner integrates strategy decisions into request enhancement.
Execution & Vision Integration
lib/orchestration/types.ts, lib/orchestration/executor.ts
Extended stream event types for visual analysis, corrections, neural generation, hybrid pipeline, strategy classification, and workflow updates. Executor adds visual feedback loops with vision-based improvement suggestions and dynamic scene context injection.
Chat UI Component
components/projects/project-chat.tsx
Major rewrite adding streaming response handling with stall-timeout abort, server-sent event processing, retry semantics with state tracking, workflow panel integration, planning metadata display, and MCP command rendering.
Workflow Panel Component
components/projects/workflow-panel.tsx
New React component to render multi-step guided workflows with per-step status, action buttons, progress bar, blocking dependencies, execution state management, and API integration for step execution/skip/manual actions.
Blender Script Library Expansion
data/blender-scripts/auto_retopology.py, data/blender-scripts/auto_rigify.py, data/blender-scripts/auto_uv_unwrap.py, data/blender-scripts/api_version_compatibility.py, data/blender-scripts/displacement_textures.py, data/blender-scripts/hdri_lighting.py, data/blender-scripts/import_neural_mesh.py, data/blender-scripts/interior_rooms.py, data/blender-scripts/pbr_texture_loader.py, data/blender-scripts/photorealistic_materials.py, data/blender-scripts/procedural_animation.py, data/blender-scripts/blender_api_pitfalls.py, data/blender-scripts/model_export.py
13 new Blender Python modules providing automated retopology (voxel/Quadriflow), Rigify rigging pipeline, smart UV unwrapping, material creation (PBR/shader-based), displacement workflows, HDRI/world setup, neural mesh import pipeline, interior scene generation, texture baking, procedural animation, and production export presets.
Blender Rendering Updates
data/blender-scripts/tasks/rendering/eevee_setup.py, data/blender-scripts/tasks/rendering/toon_setup.py
Updated rendering scripts for Blender 5.x compatibility. Removed deprecated Eevee properties, introduced compositor-based bloom, switched to taa_samples configuration, and updated documentation to reflect 5.x behavior changes.
Model & Prompt Updates
lib/ai/index.ts, lib/gemini.ts, lib/stripe.ts, lib/ai/prompts.ts
Updated default Gemini model from 2.5-pro to 3.1-pro-preview. Significantly expanded system prompts with Blender 5.x-specific guidance, conditional planning rules for new vs. edit scenes, production pipeline capabilities, neural/hybrid approaches, scene grounding, material patterns, and removed obsolete FEW_SHOT_EXAMPLES.
Desktop & Web Addons
desktop/assets/modelforge-addon.py, public/downloads/modelforge-addon.py
Enhanced viewport screenshot handling with optional base64 encoding. Updated Polyhaven/Hyper3D/Sketchfab integration for texture application. Added temporary file handling for inline image export and improved error cleanup paths.
MCP Screenshot Integration
lib/mcp/screenshot.ts
Removed explicit width/height/format defaults from request payload, relying on server response and fallback logic to provide dimensions and format.
Training & Fine-tuning
scripts/generate-training-data.ts, training/train_blender_codegen.py, training/requirements_training.txt, training/eval_prompts.json
New training pipeline to generate JSONL training pairs from Blender scripts (function and full-script level), evaluate with 50 diverse prompts, and fine-tune a Qwen3 LM with 4-bit NF4 quantization and LoRA for Blender code generation.
Mock Neural Server & Documentation
scripts/mock-neural-server.ts, docs/3d-pipeline-integration.md, docs/3d-pipeline-strategy.md, docs/HANDOFF.md, docs/addon-integration-roadmap.md, docs/research-pipeline-techniques.md, package.json, GEMINI.md
New mock HTTP server simulating neural generator responses. Comprehensive documentation covering 3D pipeline architecture, strategy comparison, addon integration priorities, research techniques, session handoff notes. Updated package.json with mock:neural script and training dependencies.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ChatAPI as Chat Route
    participant Classifier as Strategy Classifier
    participant Advisor as Workflow Advisor
    participant Executor as Plan Executor
    participant Neural as Neural Provider
    participant Blender as Blender/MCP

    User->>ChatAPI: Send request + scene snapshot
    ChatAPI->>Classifier: Classify strategy
    Classifier-->>ChatAPI: strategy (procedural/neural/hybrid)
    
    alt Neural or Hybrid
        ChatAPI->>Advisor: Generate workflow proposal
        Advisor-->>ChatAPI: Multi-step workflow
        ChatAPI-->>User: Stream workflow_proposal event
        
        loop Each Workflow Step
            User->>ChatAPI: Execute step
            alt Neural Tool
                ChatAPI->>Neural: Generate geometry/texture
                Neural-->>ChatAPI: Model path
            else Blender Agent
                ChatAPI->>Executor: Execute plan
                Executor->>Blender: Run MCP script
                Blender-->>Executor: Result
            else Manual
                ChatAPI-->>User: Await manual completion
            end
            ChatAPI-->>User: Stream step_update event
        end
    else Procedural
        ChatAPI->>Executor: Generate & execute plan
        Executor->>Blender: Python script via MCP
        Blender-->>Executor: Scene state
        Executor-->>ChatAPI: Execution result
    end
    
    ChatAPI-->>User: Stream planning + commands + logs
sequenceDiagram
    participant WorkflowPanel as UI Component
    participant StepAPI as Step Route
    participant Neural as Neural Client
    participant Planner as Blender Planner
    participant Executor as Plan Executor
    participant MCP as MCP/Blender

    WorkflowPanel->>StepAPI: POST execute action
    
    alt recommendedTool = neural
        StepAPI->>Neural: Generate model
        Neural-->>StepAPI: Model path
    else recommendedTool = blender_agent
        StepAPI->>Planner: Generate plan
        Planner-->>StepAPI: Plan steps
        StepAPI->>Executor: Execute plan
        Executor->>MCP: Run Python
        MCP-->>Executor: Result
    else recommendedTool = manual
        StepAPI-->>WorkflowPanel: Manual complete
    end
    
    StepAPI-->>WorkflowPanel: WorkflowStepResult
    WorkflowPanel->>WorkflowPanel: Update step status
    WorkflowPanel->>WorkflowPanel: Check blocking
    WorkflowPanel->>WorkflowPanel: Show next step

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 A rabbit's rejoicing in code so grand,
With neural minds building models from sand,
Workflows like warrens, each step on its way,
3D pipelines blooming—hooray, hooray!
Strategy routers guide paths through the night,
While Blender scripts polish each mesh, oh so bright!

🚥 Pre-merge checks — ✅ 3 passed
  • Title check — ✅ Passed: The title accurately summarizes the main changes; Blender 5.x API fixes and planner edit awareness are the primary objectives listed in the PR description.
  • Description check — ✅ Passed: The PR description covers key sections (summary, Blender 5.x fixes, planner edit awareness, other improvements) but lacks detailed testing information and omits several template sections such as motivation/context and testing methodology.
  • Docstring coverage — ✅ Passed: Coverage is 90.48%, above the required 80.00% threshold.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 115

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (10)
lib/mcp/screenshot.ts (2)

71-73: ⚠️ Potential issue | 🟡 Minor

estimateScreenshotSize returns decoded binary size, not API payload size.

image.length * 0.75 is the estimated size of the decoded binary data (reversing base64 expansion). The actual contribution to an API payload is the base64 string itself, whose length is simply image.length. The JSDoc says "Useful for monitoring API payload sizes", which is contradicted by the formula.

🐛 Proposed fix
 export function estimateScreenshotSize(screenshot: ViewportScreenshotResponse): number {
-    // Base64 encoding increases size by ~33%
-    return Math.ceil(screenshot.image.length * 0.75)
+    // The image field is a base64 string; its length IS the payload cost.
+    // Multiply by 0.75 only if you need the decoded (binary) byte count.
+    return screenshot.image.length  // bytes contributed to the JSON payload
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/mcp/screenshot.ts` around lines 71 - 73, The function
estimateScreenshotSize currently computes the decoded binary size by applying a
0.75 factor; change it to return the actual API payload size (the base64 string
length) instead so it matches the JSDoc: use the length of screenshot.image
(e.g., return Math.ceil(screenshot.image.length) or simply
screenshot.image.length) and remove the 0.75 multiplier; update any inline
comment to state it returns the base64 payload size.

17-47: ⚠️ Potential issue | 🟠 Major

options.width/height/format are silently ignored — API contract mismatch.

Sending params: {} means the MCP server never receives the caller-supplied dimensions or format. These options now only act as tertiary fallbacks in the return value (result.width ?? options.width ?? 1920), which are reached only when the server omits those fields. If the server always populates result.width/height/format (even with its own defaults), options.* is completely invisible — a caller passing { width: 512, height: 512, format: "jpeg" } gets a full-resolution PNG back with no indication their request was ignored.

Either forward the options to the server, or remove the parameters from the public signature to avoid a misleading contract.

🐛 Proposed fix — restore forwarding of options to MCP params
-export async function getViewportScreenshot(options: {
-    width?: number
-    height?: number
-    format?: "png" | "jpeg"
-} = {}): Promise<ViewportScreenshotResponse> {
+export async function getViewportScreenshot(options: {
+    width?: number
+    height?: number
+    format?: "png" | "jpeg"
+} = {}): Promise<ViewportScreenshotResponse> {
     const client = createMcpClient()

     try {
         const response = await client.execute<ViewportScreenshotResponse>({
             type: "get_viewport_screenshot",
-            params: {},
+            params: {
+                ...(options.width !== undefined && { width: options.width }),
+                ...(options.height !== undefined && { height: options.height }),
+                ...(options.format !== undefined && { format: options.format }),
+            },
         })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/mcp/screenshot.ts` around lines 17 - 47, getViewportScreenshot currently
ignores the caller-provided options because it sends params: {} to the MCP;
update the call to client.execute inside getViewportScreenshot to forward
options (width, height, format) as the request params (e.g., params: { width:
options.width, height: options.height, format: options.format }) so the server
receives the requested dimensions/format, while keeping the existing fallback
logic when result fields are missing; ensure the params keys match the MCP API
names and that format is restricted to "png" | "jpeg" as in the function
signature.
desktop/assets/modelforge-addon.py (1)

757-758: ⚠️ Potential issue | 🟠 Major

Add use_nodes = True before accessing node_tree on newly created materials.

The code accesses mat.node_tree.nodes and mat.node_tree.links immediately after creating materials with bpy.data.materials.new() at lines 757-758 and 985, without first setting use_nodes = True. In Blender, depending on the render engine, a newly created material may have node_tree = None, which would cause an AttributeError when accessing node_tree.nodes. Set use_nodes = True to ensure the node tree is available.

Proposed fix
             mat = bpy.data.materials.new(name=asset_id)
+            mat.use_nodes = True
             nodes = mat.node_tree.nodes

And similarly at line 985:

             new_mat = bpy.data.materials.new(name=new_mat_name)
+            new_mat.use_nodes = True
             nodes = new_mat.node_tree.nodes
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@desktop/assets/modelforge-addon.py` around lines 757 - 758, After creating a
new material with bpy.data.materials.new(name=asset_id) you must enable nodes
before accessing mat.node_tree; set mat.use_nodes = True immediately after the
material creation (for the variable mat) so subsequent accesses like
mat.node_tree.nodes and mat.node_tree.links won’t raise AttributeError; apply
the same change for the second material creation site (the other
bpy.data.materials.new(...) usage near the later block) to ensure both node
trees are available.
public/downloads/modelforge-addon.py (3)

1033-1108: 🛠️ Refactor suggestion | 🟠 Major

Duplicate shader node creation — first pass nodes are orphaned by second pass.

The for map_type, image in texture_images.items() loop (lines 1033–1053) creates ShaderNodeNormalMap and ShaderNodeDisplacement nodes and connects them. Then the "second pass" (lines 1056–1108) creates new instances of those same node types and reconnects the same BSDF inputs, leaving the first-pass nodes orphaned in the node tree.

Remove the normal/displacement handling from the first pass (lines 1040–1052), since the second pass handles it more thoroughly (including ARM and AO logic).

Proposed fix — remove duplicate handling from first pass
                 # Connect to appropriate input on Principled BSDF
                 if map_type.lower() in ['color', 'diffuse', 'albedo']:
                     links.new(tex_node.outputs['Color'], principled.inputs['Base Color'])
                 elif map_type.lower() in ['roughness', 'rough']:
                     links.new(tex_node.outputs['Color'], principled.inputs['Roughness'])
                 elif map_type.lower() in ['metallic', 'metalness', 'metal']:
                     links.new(tex_node.outputs['Color'], principled.inputs['Metallic'])
-                elif map_type.lower() in ['normal', 'nor', 'dx', 'gl']:
-                    # Add normal map node
-                    normal_map = nodes.new(type='ShaderNodeNormalMap')
-                    normal_map.location = (x_pos + 200, y_pos)
-                    links.new(tex_node.outputs['Color'], normal_map.inputs['Color'])
-                    links.new(normal_map.outputs['Normal'], principled.inputs['Normal'])
-                elif map_type.lower() in ['displacement', 'disp', 'height']:
-                    # Add displacement node
-                    disp_node = nodes.new(type='ShaderNodeDisplacement')
-                    disp_node.location = (x_pos + 200, y_pos - 200)
-                    disp_node.inputs['Scale'].default_value = 0.1  # Reduce displacement strength
-                    links.new(tex_node.outputs['Color'], disp_node.inputs['Height'])
-                    links.new(disp_node.outputs['Displacement'], output.inputs['Displacement'])
+                # Normal, displacement, AO, and ARM are handled in the second pass below
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@public/downloads/modelforge-addon.py` around lines 1033 - 1108, The
first-pass loop that iterates texture_images (the for map_type, image in
texture_images.items() block) must not create ShaderNodeNormalMap or
ShaderNodeDisplacement or link them to principled/output because the second-pass
already creates and wires normal and displacement nodes; remove the branches
handling map_type.lower() in ['normal','nor','dx','gl'] and
['displacement','disp','height'] (the creation of
ShaderNodeNormalMap/ShaderNodeDisplacement and the links.new calls) from that
first-pass, leaving only the color/roughness/metallic handling and the y_pos
decrement so the second-pass code (which creates normal_map_node and disp_node
and links them) is the sole place those special nodes are instantiated and
connected.

462-536: ⚠️ Potential issue | 🔴 Critical

UnboundLocalError in except handler if an early exception occurs.

return_base64 is first assigned at line 478 inside the try block. If an exception fires before that point (e.g., bpy.context.screen is None), the except handler at line 531 references an unbound name, raising UnboundLocalError and masking the real error.

Also, filepath may still hold the caller-supplied None at that point, so the os.remove(filepath) call would also fail.

🐛 Proposed fix — initialize sentinel values before the try block
         import os
         import tempfile
         import base64
 
+        return_base64 = filepath is None
+        tmp_created = False
+
         try:
             # Find the active 3D viewport
             area = None
             for a in bpy.context.screen.areas:
                 if a.type == 'VIEW_3D':
                     area = a
                     break
 
             if not area:
                 return {"error": "No 3D viewport found"}
 
             # Determine file path — use temp if none provided
-            return_base64 = filepath is None
             if return_base64:
                 tmp = tempfile.NamedTemporaryFile(suffix=f".{format}", delete=False)
                 filepath = tmp.name
                 tmp.close()
+                tmp_created = True
 
             # ... rest of try ...
 
         except Exception as e:
             # Clean up temp file on error
-            if return_base64 and filepath:
+            if tmp_created and filepath:
                 try:
                     os.remove(filepath)
                 except OSError:
                     pass
             return {"error": str(e)}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@public/downloads/modelforge-addon.py` around lines 462 - 536, Initialize the
sentinel variables before the try block to avoid UnboundLocalError: set
return_base64 = False and ensure filepath (and any tmp-related variable) is
defined (e.g., filepath = None) before entering the try; inside the except
handler, only attempt os.remove if filepath is not None and
os.path.exists(filepath) to avoid removing a None or non-existent path. Update
references to return_base64 and filepath in the try/except and cleanup code (the
variables named return_base64 and filepath and the except block that currently
removes the temp file) so the except handler never references uninitialized
names and only deletes real files.
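The scoping hazard is plain Python and reproducible without Blender. A minimal sketch (function names and the error message are illustrative, not the addon's actual code):

```python
def capture(filepath=None):
    """Mimics the addon's flow: return_base64 is only assigned inside try."""
    try:
        raise RuntimeError("screen is None")  # early failure, before the assignment
        return_base64 = filepath is None      # compile-time local, never bound
    except Exception as e:
        # Referencing the unbound local raises UnboundLocalError here,
        # masking the original RuntimeError.
        if return_base64 and filepath:
            pass
        return {"error": str(e)}

def capture_fixed(filepath=None):
    """Sentinels initialised before the try block, as in the proposed fix."""
    return_base64 = filepath is None
    tmp_created = False
    try:
        raise RuntimeError("screen is None")
    except Exception as e:
        if tmp_created and filepath:  # safe: both names are always bound
            pass
        return {"error": str(e)}

try:
    capture()
except UnboundLocalError:
    print("real error was masked")  # → real error was masked

print(capture_fixed())  # → {'error': 'screen is None'}
```

Because Python decides at compile time that `return_base64` is a local (an assignment exists somewhere in the function body), the except handler sees an unbound name rather than falling back to any outer scope.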

1271-1278: ⚠️ Potential issue | 🟠 Major

Error returns a plain string instead of an error dict — client sees "status": "success".

The _execute_command_internal handler wraps the return value as {"status": "success", "result": <value>}. Returning a bare string here means the client receives a "success" response containing an error message. This applies to create_rodin_job, poll_rodin_job_status, and import_generated_asset (line 1450).

Additionally, the f"..." has no placeholders (Ruff F541).

🐛 Proposed fix (apply same pattern at lines 1347, 1450)
             case _:
-                return f"Error: Unknown Hyper3D Rodin mode!"
+                return {"error": "Unknown Hyper3D Rodin mode!"}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@public/downloads/modelforge-addon.py` around lines 1271 - 1278, The
create_rodin_job function currently returns a plain string (and uses an
unnecessary f-string) on unknown mode which causes the RPC wrapper to mark it as
success; change the fallback to return an error dict consistent with other
handlers (e.g. {"status":"error", "error": "Unknown Hyper3D Rodin mode"}),
remove the f-string, and apply the same pattern to poll_rodin_job_status and
import_generated_asset so they also return an error dict (not a bare string)
when given an unknown mode.
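The wrapping behaviour that turns a bare error string into a "success" response can be sketched in a few lines (handler names here are hypothetical stand-ins for the addon's dispatcher):

```python
def execute_command_internal(handler):
    # Mirrors the addon's dispatcher: any return value is wrapped as success.
    return {"status": "success", "result": handler()}

def bad_mode_handler():
    return "Error: Unknown Hyper3D Rodin mode!"    # bare string, looks like success

def good_mode_handler():
    return {"error": "Unknown Hyper3D Rodin mode"}  # error dict the client can detect

print(execute_command_internal(bad_mode_handler)["status"])              # → success
print("error" in execute_command_internal(good_mode_handler)["result"])  # → True
```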
lib/ai/index.ts (1)

1-9: ⚠️ Potential issue | 🟡 Minor

Stale module docstring — still references "Gemini 2.5 Pro".

Line 5 reads "Gemini 2.5 Pro for LLM generation" but DEFAULT_MODEL is now gemini-3.1-pro-preview.

📝 Proposed fix
- * - Gemini 2.5 Pro for LLM generation
+ * - Gemini 3.1 Pro (preview) for LLM generation
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/ai/index.ts` around lines 1 - 9, Update the module docstring in
lib/ai/index.ts to reflect the current default LLM by replacing the outdated
"Gemini 2.5 Pro for LLM generation" text with "gemini-3.1-pro-preview" (or a
short phrase referencing the DEFAULT_MODEL constant) so the header matches the
actual DEFAULT_MODEL value; search for the top-level comment block and the
DEFAULT_MODEL symbol to ensure consistency between documentation and code.
lib/stripe.ts (1)

4-4: 🧹 Nitpick | 🔵 Trivial

Consider upgrading the pinned Stripe API version.

The current Stripe API version is 2026-01-28.clover. The code is pinned to 2025-02-24.acacia, which is the last monthly version in the prior major release family. Starting with the 2024-09-30.acacia release, Stripe follows a new API release process; twice a year, a new release (e.g., "clover") is issued with breaking changes. Upgrading to the clover release ensures access to the latest features and types.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/stripe.ts` at line 4, Update the pinned Stripe API version string used
when creating the Stripe client: replace the old apiVersion value
('2025-02-24.acacia') with the current major release value ('2026-01-28.clover')
wherever the apiVersion option is set (look for the apiVersion key in the Stripe
client constructor/config). After changing the string, run any Stripe-related
type checks and integration tests and adjust any call sites or types that break
due to the new release's breaking changes.
GEMINI.md (1)

466-517: ⚠️ Potential issue | 🟡 Minor

Duplicate and stale sections — the bottom half of the file is an older copy.

Lines 466–567 duplicate the "Agent Rules", "Progress Tracking", "Session Log", and "Key Files Reference" sections that already exist (and are more up-to-date) in Lines 72–150. Notable inconsistencies between them:

  • Line 469 references Claude.md (the old filename), while the top uses GEMINI.md.
  • Line 501 says "Next.js 15" vs. Line 15 says "Next.js 16".
  • Line 502 says "NextAuth v5" while Line 18 says "Supabase Auth (NextAuth fully removed)".

Remove the duplicate bottom sections to avoid drift and confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@GEMINI.md` around lines 466 - 517, Remove the duplicate outdated sections at
the bottom of GEMINI.md by deleting the repeated "Agent Rules", "Progress
Tracking", "Session Log", and "Key Files Reference" blocks (the copy that
references Claude.md, Next.js 15, and NextAuth v5), leaving only the up-to-date
top-half content; ensure references to "Claude.md" are changed back to
"GEMINI.md" if present and reconcile any remaining version mentions so the file
consistently uses the current values (e.g., Next.js 16 and Supabase Auth).
lib/orchestration/executor.ts (1)

694-694: ⚠️ Potential issue | 🔴 Critical

Bug: mat.use_nodes = True missing before accessing mat.node_tree in ensureDefaultMaterials.

The generated Python creates a new material with bpy.data.materials.new() and immediately accesses mat.node_tree.nodes.get('Principled BSDF') without calling mat.use_nodes = True first. In Blender 5.x, mat.node_tree is None until use_nodes is enabled — this will crash with an AttributeError. The prompts file (lines 82 and 98) explicitly documents this requirement.

Suggested fix
-    const script = `import bpy\n\nDEFAULT_NAME = "ModelForge_Default_Material"\nmat = bpy.data.materials.get(DEFAULT_NAME)\nif mat is None:\n    mat = bpy.data.materials.new(name=DEFAULT_NAME)\n    bsdf = mat.node_tree.nodes.get('Principled BSDF')\n    if bsdf:\n        bsdf.inputs['Base Color'].default_value = (0.85, 0.82, 0.78, 1.0)\n        bsdf.inputs['Roughness'].default_value = 0.4\n\nfor obj_name in [${pythonList}]:\n    obj = bpy.data.objects.get(obj_name)\n    if not obj or obj.type != 'MESH':\n        continue\n    if not obj.data.materials:\n        obj.data.materials.append(mat)\n    else:\n        obj.data.materials[0] = mat\n`
+    const script = `import bpy\n\nDEFAULT_NAME = "ModelForge_Default_Material"\nmat = bpy.data.materials.get(DEFAULT_NAME)\nif mat is None:\n    mat = bpy.data.materials.new(name=DEFAULT_NAME)\n    mat.use_nodes = True\n    bsdf = mat.node_tree.nodes.get('Principled BSDF')\n    if bsdf:\n        bsdf.inputs['Base Color'].default_value = (0.85, 0.82, 0.78, 1.0)\n        bsdf.inputs['Roughness'].default_value = 0.4\n\nfor obj_name in [${pythonList}]:\n    obj = bpy.data.objects.get(obj_name)\n    if not obj or obj.type != 'MESH':\n        continue\n    if not obj.data.materials:\n        obj.data.materials.append(mat)\n    else:\n        obj.data.materials[0] = mat\n`

The diff adds mat.use_nodes = True\n after bpy.data.materials.new(name=DEFAULT_NAME).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@lib/orchestration/executor.ts` at line 694, The generated Blender Python
script in executor.ts (the `script` string used by ensureDefaultMaterials)
creates a new material via `bpy.data.materials.new(name=DEFAULT_NAME)` and then
accesses `mat.node_tree` without enabling nodes; set `mat.use_nodes = True`
immediately after creating the material to ensure `mat.node_tree` is available
before calling `mat.node_tree.nodes.get('Principled BSDF')`, updating the
`script` string in the `ensureDefaultMaterials`/executor.ts location
accordingly.

Comment on lines +314 to +324
def set_camera_looking_slightly_up(camera_obj):
"""CORRECT — X=95° means looking 5° above horizontal."""
camera_obj.rotation_euler = (math.radians(95), 0, 0)

def set_camera_looking_slightly_down(camera_obj):
"""CORRECT — X=80° means looking 10° below horizontal."""
camera_obj.rotation_euler = (math.radians(80), 0, 0)

def set_camera_looking_horizontal(camera_obj):
"""CORRECT — X=90° = perfectly horizontal."""
camera_obj.rotation_euler = (math.radians(90), 0, 0)
⚠️ Potential issue | 🟡 Minor

Labeling these as "CORRECT" while zeroing Y/Z may mislead readers

Each function writes the entire rotation_euler tuple with Y=0, Z=0, which silently destroys any existing camera heading and roll. This is fine as a self-contained illustration of X-axis direction, but calling them "CORRECT" without qualification could teach the AI model to always zero out the other axes. A brief note or changing to mutate only rotation_euler[0] would make the intent clearer.

🛡️ Alternative: mutate only the X component
 def set_camera_looking_slightly_up(camera_obj):
-    """CORRECT — X=95° means looking 5° above horizontal."""
-    camera_obj.rotation_euler = (math.radians(95), 0, 0)
+    """CORRECT — X=95° means looking 5° above horizontal (preserves Y/Z heading)."""
+    camera_obj.rotation_euler[0] = math.radians(95)

 def set_camera_looking_slightly_down(camera_obj):
-    """CORRECT — X=80° means looking 10° below horizontal."""
-    camera_obj.rotation_euler = (math.radians(80), 0, 0)
+    """CORRECT — X=80° means looking 10° below horizontal (preserves Y/Z heading)."""
+    camera_obj.rotation_euler[0] = math.radians(80)

 def set_camera_looking_horizontal(camera_obj):
-    """CORRECT — X=90° = perfectly horizontal."""
-    camera_obj.rotation_euler = (math.radians(90), 0, 0)
+    """CORRECT — X=90° = perfectly horizontal (preserves Y/Z heading)."""
+    camera_obj.rotation_euler[0] = math.radians(90)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data/blender-scripts/blender_api_pitfalls.py` around lines 314 - 324, The
three helper functions set_camera_looking_slightly_up,
set_camera_looking_slightly_down, and set_camera_looking_horizontal overwrite
the entire rotation_euler tuple (zeroing Y and Z) which unintentionally clears
heading/roll; either update their docstrings to warn that these examples
intentionally reset the other axes, or change the
implementations to only set the X component (e.g., mutate rotation_euler[0] to
math.radians(...)) so Y/Z remain unchanged and existing heading/roll are not
lost.
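The difference is easy to see with a plain list standing in for rotation_euler (no mathutils needed; the starting angles are illustrative):

```python
import math

# Camera with an existing 45° tilt, 10° roll, and 30° heading.
rotation_euler = [math.radians(45), math.radians(10), math.radians(30)]

# What the reviewed functions write — the whole tuple, zeroing Y and Z:
clobbered = (math.radians(95), 0, 0)
print(math.degrees(clobbered[2]))  # → 0.0, heading lost

# Mutating only the X component preserves heading and roll:
rotation_euler[0] = math.radians(95)
print(round(math.degrees(rotation_euler[0])))  # → 95
print(round(math.degrees(rotation_euler[2])))  # → 30, heading survives
```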

Comment on lines +76 to +82
def cleanup_neural_mesh(
obj,
fix_normals: bool = True,
remove_doubles: bool = True,
merge_distance: float = 0.0001,
fill_holes: bool = True
):
🧹 Nitpick | 🔵 Trivial

Boolean positional arguments should be keyword-only (Ruff FBT001/FBT002/FBT003)

Multiple function signatures accept boolean flags as positional arguments (fix_normals, remove_doubles, fill_holes, ground_to_floor, cleanup, auto_uv, setup_pbr). Per Ruff FBT001/FBT002, these invite silent call-site errors (e.g., cleanup_neural_mesh(obj, False, True, 0.001) where the meaning of each False/True is opaque). Making them keyword-only (* separator) aligns with Ruff's guidance and improves readability.

♻️ Proposed refactor (example for cleanup_neural_mesh)
 def cleanup_neural_mesh(
     obj,
+    *,
     fix_normals: bool = True,
     remove_doubles: bool = True,
     merge_distance: float = 0.0001,
     fill_holes: bool = True
 ):

Apply the same * separator to normalize_neural_mesh, auto_uv_neural_mesh, and full_neural_import_pipeline, then update the internal call at line 95 to use a keyword argument:

-    obj.select_set(True)
+    obj.select_set(state=True)

Also applies to: 127-131, 197-201, 282-292

🧰 Tools
🪛 Ruff (0.15.1)

[error] 78-78: Boolean-typed positional argument in function definition

(FBT001)


[error] 78-78: Boolean default positional argument in function definition

(FBT002)


[error] 79-79: Boolean-typed positional argument in function definition

(FBT001)


[error] 79-79: Boolean default positional argument in function definition

(FBT002)


[error] 81-81: Boolean-typed positional argument in function definition

(FBT001)


[error] 81-81: Boolean default positional argument in function definition

(FBT002)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data/blender-scripts/import_neural_mesh.py` around lines 76 - 82, Update the
function signatures to make boolean flags keyword-only by adding a
keyword-only separator (*) before those bool parameters: change
cleanup_neural_mesh(obj, fix_normals: bool = True, remove_doubles: bool = True,
merge_distance: float = 0.0001, fill_holes: bool = True) and the signatures for
normalize_neural_mesh, auto_uv_neural_mesh, and full_neural_import_pipeline so
flags like fix_normals, remove_doubles, fill_holes, ground_to_floor, cleanup,
auto_uv, and setup_pbr are after a *; then update any internal call sites
(notably the call to cleanup_neural_mesh at the previously referenced internal
call) to pass those values as keyword arguments (e.g., cleanup_neural_mesh(obj,
fix_normals=True, ...) ) so calls remain unambiguous.
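The `*` separator makes ambiguous calls fail loudly instead of silently reordering flags. A standalone sketch with a stub body (no bpy):

```python
def cleanup_neural_mesh(obj, *, fix_normals=True, remove_doubles=True,
                        merge_distance=0.0001, fill_holes=True):
    """Stub: returns the flags it received instead of touching bpy."""
    return {"fix_normals": fix_normals, "remove_doubles": remove_doubles,
            "merge_distance": merge_distance, "fill_holes": fill_holes}

# Keyword calls remain valid and self-documenting:
print(cleanup_neural_mesh("Mesh", fill_holes=False)["fill_holes"])  # → False

# Positional booleans are now a TypeError instead of a silent mix-up:
try:
    cleanup_neural_mesh("Mesh", False, True)
except TypeError as e:
    print("rejected:", type(e).__name__)  # → rejected: TypeError
```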

Comment on lines +94 to +120
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Remove loose geometry
bpy.ops.mesh.delete_loose(use_verts=True, use_edges=True, use_faces=False)

# Merge by distance (remove duplicate vertices)
if remove_doubles:
bpy.ops.mesh.remove_doubles(threshold=merge_distance)

# Recalculate normals (neural meshes often have flipped faces)
if fix_normals:
bpy.ops.mesh.normals_make_consistent(inside=False)

# Fill holes (non-manifold boundaries)
if fill_holes:
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.select_non_manifold()
try:
bpy.ops.mesh.fill()
except RuntimeError:
pass # Some non-manifold edges can't be filled

bpy.ops.object.mode_set(mode='OBJECT')
print(f"Cleaned {obj.name}: {len(obj.data.vertices)} verts, {len(obj.data.polygons)} faces")
⚠️ Potential issue | 🟠 Major

No try/finally guard — any exception leaves the object stranded in EDIT mode

mode_set(mode='EDIT') is called at line 96 and restored at line 119, but only bpy.ops.mesh.fill() is wrapped in a try/except. Any RuntimeError raised by delete_loose, remove_doubles/merge_by_distance, normals_make_consistent, or select_non_manifold will propagate out while the object remains in EDIT mode, breaking every subsequent operator call in the pipeline. The same pattern repeats in auto_uv_neural_mesh.

🛡️ Proposed fix
     bpy.context.view_layer.objects.active = obj
     obj.select_set(True)
     bpy.ops.object.mode_set(mode='EDIT')
-    bpy.ops.mesh.select_all(action='SELECT')
-
-    # Remove loose geometry
-    bpy.ops.mesh.delete_loose(use_verts=True, use_edges=True, use_faces=False)
-
-    # Merge by distance (remove duplicate vertices)
-    if remove_doubles:
-        bpy.ops.mesh.merge_by_distance(distance=merge_distance)
-
-    # Recalculate normals (neural meshes often have flipped faces)
-    if fix_normals:
-        bpy.ops.mesh.normals_make_consistent(inside=False)
-
-    # Fill holes (non-manifold boundaries)
-    if fill_holes:
-        bpy.ops.mesh.select_all(action='DESELECT')
-        bpy.ops.mesh.select_non_manifold()
-        try:
-            bpy.ops.mesh.fill()
-        except RuntimeError:
-            pass  # Some non-manifold edges can't be filled
-
-    bpy.ops.object.mode_set(mode='OBJECT')
+    try:
+        bpy.ops.mesh.select_all(action='SELECT')
+        bpy.ops.mesh.delete_loose(use_verts=True, use_edges=True, use_faces=False)
+        if remove_doubles:
+            bpy.ops.mesh.merge_by_distance(distance=merge_distance)
+        if fix_normals:
+            bpy.ops.mesh.normals_make_consistent(inside=False)
+        if fill_holes:
+            bpy.ops.mesh.select_all(action='DESELECT')
+            bpy.ops.mesh.select_non_manifold()
+            try:
+                bpy.ops.mesh.fill()
+            except RuntimeError:
+                pass
+    finally:
+        bpy.ops.object.mode_set(mode='OBJECT')
🧰 Tools
🪛 Ruff (0.15.1)

[error] 95-95: Boolean positional value in function call

(FBT003)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data/blender-scripts/import_neural_mesh.py` around lines 94 - 120, The block
that sets bpy.ops.object.mode_set(mode='EDIT') (and the similar sequence in
auto_uv_neural_mesh) must be protected with a try/finally so the mode is always
restored to 'OBJECT' even if bpy.ops.mesh.delete_loose,
bpy.ops.mesh.remove_doubles (merge_by_distance),
bpy.ops.mesh.normals_make_consistent, bpy.ops.mesh.select_non_manifold, or other
mesh operators raise an exception; modify the code around the EDIT-mode section
(the sequence starting with bpy.context.view_layer.objects.active = obj /
obj.select_set(True) / bpy.ops.object.mode_set(mode='EDIT')) to perform the mesh
operations inside a try block and call bpy.ops.object.mode_set(mode='OBJECT') in
a finally block (also ensure any temporary selections are cleaned up), and apply
the same try/finally guard to the auto_uv_neural_mesh routine to avoid leaving
objects stranded in EDIT mode.
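The guarantee try/finally provides can be demonstrated without bpy by stubbing the mode switch (all names here are illustrative):

```python
mode = "OBJECT"

def mode_set(new_mode):
    global mode
    mode = new_mode

def cleanup_mesh(fail=False):
    mode_set("EDIT")
    try:
        if fail:
            raise RuntimeError("operator failed")  # e.g. merge_by_distance blowing up
    finally:
        mode_set("OBJECT")  # always runs, even when an operator raises

try:
    cleanup_mesh(fail=True)
except RuntimeError:
    pass

print(mode)  # → OBJECT, not stranded in EDIT
```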

Comment on lines +155 to +158
# Ground to floor (bottom of bounding box at z=0)
if ground_to_floor:
bbox_min_z = min(v.co.z for v in obj.data.vertices)
obj.location.z -= bbox_min_z
⚠️ Potential issue | 🟠 Major

Python vertex loop for floor grounding is O(n) on meshes with 100k–500k faces

The script's own docstring calls out 100k–500k+ face counts. Iterating obj.data.vertices in Python at that scale is noticeably slow. obj.bound_box gives the pre-computed 8-corner bounding box in local space — use min(v[2] for v in obj.bound_box) for the same result at O(1) cost.

⚡ Proposed fix
-        bbox_min_z = min(v.co.z for v in obj.data.vertices)
+        bbox_min_z = min(v[2] for v in obj.bound_box)
         obj.location.z -= bbox_min_z
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data/blender-scripts/import_neural_mesh.py` around lines 155 - 158, Replace
the O(n) vertex loop that computes bbox_min_z by iterating obj.data.vertices
with a constant-time read of the precomputed bounding box: when ground_to_floor
is true, compute bbox_min_z = min(c[2] for c in obj.bound_box) (or equivalent)
and then adjust obj.location.z -= bbox_min_z; update references to
obj.data.vertices and bbox_min_z accordingly to avoid iterating large meshes.
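The equivalence holds because bound_box is just the 8 corners spanning the same per-axis extremes. A small check with plain tuples standing in for the Blender data (no bpy required):

```python
# Local-space vertices of a hypothetical mesh.
vertices = [(0.0, 0.0, -1.2), (1.0, 0.5, 0.3), (0.2, 0.9, 2.5)]

# Blender's bound_box: the 8 corners built from the min/max of each axis.
xs = [v[0] for v in vertices]
ys = [v[1] for v in vertices]
zs = [v[2] for v in vertices]
bound_box = [(x, y, z) for x in (min(xs), max(xs))
                       for y in (min(ys), max(ys))
                       for z in (min(zs), max(zs))]

# O(n) vertex scan and O(1) corner read agree:
print(min(v[2] for v in vertices))   # → -1.2
print(min(c[2] for c in bound_box))  # → -1.2
```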

Comment on lines +206 to +226
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

if method == 'SMART':
bpy.ops.uv.smart_project(
angle_limit=math.radians(66),
island_margin=0.02,
area_weight=0.0,
scale_to_bounds=True
)
elif method == 'LIGHTMAP':
bpy.ops.uv.lightmap_pack(
PREF_CONTEXT='ALL_FACES',
PREF_PACK_IN_ONE=True,
PREF_BOX_DIV=12,
PREF_MARGIN_DIV=0.2
)

bpy.ops.object.mode_set(mode='OBJECT')
⚠️ Potential issue | 🟠 Major

No try/finally guard — exception during UV unwrap leaves object in EDIT mode

Same issue as cleanup_neural_mesh: mode_set(mode='EDIT') at line 208 with no finally block, so any RuntimeError from smart_project or lightmap_pack traps the object in EDIT mode.

🛡️ Proposed fix
     bpy.context.view_layer.objects.active = obj
     obj.select_set(True)
     bpy.ops.object.mode_set(mode='EDIT')
-    bpy.ops.mesh.select_all(action='SELECT')
-
-    if method == 'SMART':
-        bpy.ops.uv.smart_project(...)
-    elif method == 'LIGHTMAP':
-        bpy.ops.uv.lightmap_pack(...)
-
-    bpy.ops.object.mode_set(mode='OBJECT')
+    try:
+        bpy.ops.mesh.select_all(action='SELECT')
+        if method == 'SMART':
+            bpy.ops.uv.smart_project(
+                angle_limit=math.radians(66),
+                island_margin=0.02,
+                area_weight=0.0,
+                scale_to_bounds=True
+            )
+        elif method == 'LIGHTMAP':
+            bpy.ops.uv.lightmap_pack(
+                PREF_CONTEXT='ALL_FACES',
+                PREF_PACK_IN_ONE=True,
+                PREF_BOX_DIV=12,
+                PREF_MARGIN_DIV=0.2
+            )
+        else:
+            print(f"[auto_uv_neural_mesh] Unknown UV method '{method}', skipping.")
+    finally:
+        bpy.ops.object.mode_set(mode='OBJECT')
🧰 Tools
🪛 Ruff (0.15.1)

[error] 207-207: Boolean positional value in function call

(FBT003)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data/blender-scripts/import_neural_mesh.py` around lines 206 - 226, The
UV-unwrapping code sets the object to EDIT mode
(bpy.ops.object.mode_set(mode='EDIT')) then calls bpy.ops.uv.smart_project /
bpy.ops.uv.lightmap_pack without a finally guard, so an exception leaves the
object stuck in EDIT mode; wrap the mode switch and the conditional unwrap calls
in a try/finally (or context manager) so that
bpy.ops.object.mode_set(mode='OBJECT') is always executed in the finally block
(also ensure obj.select_set(True) and bpy.context.view_layer.objects.active =
obj remain inside the try), referencing the existing bpy.ops.object.mode_set,
bpy.ops.uv.smart_project, and bpy.ops.uv.lightmap_pack symbols to locate where
to add the try/finally.

f"https://queue.fal.run/fal-ai/hyper3d/requests/{request_id}/status",
headers={
"Authorization": f"Key {bpy.context.scene.blendermcp_hyper3d_api_key}",
"Authorization": f"KEY {bpy.context.scene.blendermcp_hyper3d_api_key}",
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat public/downloads/modelforge-addon.py | sed -n '1320,1330p'

Repository: Ker102/ModelForge

Length of output: 537


🏁 Script executed:

cat public/downloads/modelforge-addon.py | sed -n '1360,1375p'

Repository: Ker102/ModelForge

Length of output: 605


🏁 Script executed:

cat public/downloads/modelforge-addon.py | sed -n '1510,1530p'

Repository: Ker102/ModelForge

Length of output: 794


🌐 Web query:

fal.ai API authentication Authorization header format documentation

💡 Result:

fal.ai’s docs specify API-key auth via the Authorization header using the Key prefix:

  • Header format: Authorization: Key YOUR_API_KEY (docs.fal.ai)
  • Admin-scope endpoints use the same format (just an Admin-scope key): Authorization: Key YOUR_ADMIN_API_KEY (docs.fal.ai)


Fix authorization header prefix casing at line 1370: "KEY" should be "Key".

The poll_rodin_job_status_fal_ai method uses "KEY {api_key}" while create_rodin_job_fal_ai (line 1330) and import_generated_asset_fal_ai (line 1523) both use "Key {api_key}". fal.ai's API documentation specifies the exact format as Authorization: Key YOUR_API_KEY (case-sensitive), so this inconsistency will cause 401 authentication failures.

🐛 Proposed fix
-                "Authorization": f"KEY {bpy.context.scene.blendermcp_hyper3d_api_key}",
+                "Authorization": f"Key {bpy.context.scene.blendermcp_hyper3d_api_key}",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@public/downloads/modelforge-addon.py` at line 1370, The Authorization header
in poll_rodin_job_status_fal_ai is using the wrong prefix casing ("KEY") causing
401s; update the header construction to use "Key
{bpy.context.scene.blendermcp_hyper3d_api_key}" to match create_rodin_job_fal_ai
and import_generated_asset_fal_ai and fal.ai's docs, ensuring the Authorization
value is case-sensitive "Key" instead of "KEY".

Comment on lines +1619 to +1621
if enabled and api_key:
return {"enabled": True, "message": "Sketchfab integration is enabled and ready to use."}
elif enabled and not api_key:
⚠️ Potential issue | 🟡 Minor

Dead code — if enabled and api_key is unreachable.

When api_key is truthy, the function always returns from the if api_key: block (lines 1584–1617). Lines 1619–1620 can never execute.

Proposed cleanup
-        if enabled and api_key:
-            return {"enabled": True, "message": "Sketchfab integration is enabled and ready to use."}
-        elif enabled and not api_key:
+        if enabled and not api_key:
             return {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@public/downloads/modelforge-addon.py` around lines 1619 - 1621, The branch
"if enabled and api_key" is dead because the earlier "if api_key:" block always
returns when api_key is truthy; remove the unreachable condition (or merge its
intent) and simplify the function by keeping the single "if api_key:" handling
and the subsequent "elif enabled and not api_key" branch, or move the
enabled/api_key checks into a single conditional that returns the correct dict;
look for the function containing the "if api_key:" block and the variables
enabled and api_key and remove or refactor the redundant "if enabled and
api_key" branch.


import http from "http"

const PORT = Number(process.env.MOCK_NEURAL_PORT || 8090)
⚠️ Potential issue | 🟡 Minor

Number() on an invalid env-var string silently produces NaN.

Number("invalid") evaluates to NaN; server.listen(NaN) is undefined behaviour in Node.js — in practice it may bind to an OS-assigned port (port-0 semantics) with no warning, silently ignoring MOCK_NEURAL_PORT.

🛡️ Proposed fix with validation
-const PORT = Number(process.env.MOCK_NEURAL_PORT || 8090)
+const rawPort = process.env.MOCK_NEURAL_PORT
+const PORT = rawPort ? parseInt(rawPort, 10) : 8090
+if (isNaN(PORT) || PORT < 1 || PORT > 65535) {
+    console.error(`[mock-neural] Invalid MOCK_NEURAL_PORT: "${rawPort}"`)
+    process.exit(1)
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-neural-server.ts` at line 15, The code uses
Number(process.env.MOCK_NEURAL_PORT || 8090) which yields NaN for invalid
strings and can cause server.listen(NaN) to behave unpredictably; change the
logic that computes the PORT constant (referencing process.env.MOCK_NEURAL_PORT
and PORT) to explicitly parse and validate the env var (e.g. parseInt or Number
then isFinite check), log or throw on invalid values, and fallback to the
default 8090 only when the parsed value is a valid positive integer before
calling server.listen so the server never receives NaN.

Comment on lines +149 to +165
const chunks: Buffer[] = []
req.on("data", (chunk: Buffer) => chunks.push(chunk))
req.on("end", () => {
const body = Buffer.concat(chunks).toString("utf-8")
console.log(`[mock-neural] /generate request received (${body.length} bytes)`)

// Simulate ~1s processing time
setTimeout(() => {
res.writeHead(200, {
"Content-Type": "model/gltf-binary",
"Content-Length": String(MOCK_GLB.length),
})
res.end(MOCK_GLB)
console.log(`[mock-neural] Returned ${MOCK_GLB.length} byte GLB`)
}, 1000)
})
return
⚠️ Potential issue | 🟡 Minor

Missing req.on("error") handler — aborted connections crash the process.

In Node.js, an unhandled error event on an EventEmitter throws synchronously, terminating the process. If the client disconnects before the request body is fully received, the IncomingMessage stream emits error (typically ECONNRESET). This is especially likely during automated tests that don't always drain the response.

🛡️ Proposed fix
     const chunks: Buffer[] = []
     req.on("data", (chunk: Buffer) => chunks.push(chunk))
+    req.on("error", (err) => {
+        console.error(`[mock-neural] Request error: ${err.message}`)
+        // Response may already be gone — attempt a 500 if headers not yet sent
+        if (!res.headersSent) {
+            res.writeHead(500, { "Content-Type": "application/json" })
+            res.end(JSON.stringify({ error: "Request error" }))
+        }
+    })
     req.on("end", () => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-neural-server.ts` around lines 149 - 165, The request handler
that collects body chunks (the block using const chunks: Buffer[] and
req.on("data"/"end")) lacks an error listener, so aborted connections can emit
an unhandled 'error' on req and crash the process; add a req.on("error", (err)
=> { /* log and early return/cleanup */ }) next to the existing listeners to log
the error (include err.message), stop further processing, and ensure you don't
call res.end(MOCK_GLB) when the request has errored or been aborted; optionally
also attach a res.on("error") to catch write errors when sending MOCK_GLB.

Comment on lines +173 to +177
server.listen(PORT, () => {
console.log(`\n🧠 Mock Neural Server running on http://localhost:${PORT}`)
console.log(` GET /health → health check`)
console.log(` POST /generate → returns minimal GLB cube mesh\n`)
})
⚠️ Potential issue | 🟡 Minor

Missing server.on("error") handler — EADDRINUSE crashes the process.

If PORT is already bound, Node.js emits an error event on the server. Without a listener, this is an uncaught exception that terminates the process immediately, with no user-friendly message.

🛡️ Proposed fix
 server.listen(PORT, () => {
     console.log(`\n🧠 Mock Neural Server running on http://localhost:${PORT}`)
     console.log(`   GET  /health   → health check`)
     console.log(`   POST /generate → returns minimal GLB cube mesh\n`)
 })
+
+server.on("error", (err: NodeJS.ErrnoException) => {
+    if (err.code === "EADDRINUSE") {
+        console.error(`[mock-neural] Port ${PORT} is already in use. Set MOCK_NEURAL_PORT to a different port.`)
+    } else {
+        console.error(`[mock-neural] Server error: ${err.message}`)
+    }
+    process.exit(1)
+})
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
server.listen(PORT, () => {
    console.log(`\n🧠 Mock Neural Server running on http://localhost:${PORT}`)
    console.log(`   GET  /health   → health check`)
    console.log(`   POST /generate → returns minimal GLB cube mesh\n`)
})
server.listen(PORT, () => {
    console.log(`\n🧠 Mock Neural Server running on http://localhost:${PORT}`)
    console.log(`   GET  /health   → health check`)
    console.log(`   POST /generate → returns minimal GLB cube mesh\n`)
})
server.on("error", (err: NodeJS.ErrnoException) => {
    if (err.code === "EADDRINUSE") {
        console.error(`[mock-neural] Port ${PORT} is already in use. Set MOCK_NEURAL_PORT to a different port.`)
    } else {
        console.error(`[mock-neural] Server error: ${err.message}`)
    }
    process.exit(1)
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/mock-neural-server.ts` around lines 173 - 177, The server currently
calls server.listen(PORT, ...) without an error listener, so an EADDRINUSE will
crash the process; add server.on("error", (err) => { if (err && err.code ===
"EADDRINUSE") log a clear message including PORT and exit non‑zero
(process.exit(1)); otherwise log the error and rethrow or exit } ) adjacent to
the existing server.listen call to gracefully handle port-in-use and other
server errors; reference the existing server variable, PORT constant, and the
server.listen callback when adding this handler.
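The `EADDRINUSE` behavior is easy to confirm in isolation with a small sketch (hypothetical, not the PR's code): bind an ephemeral port, then try to bind the same port again. The second `listen` does not throw synchronously — it emits `error` on the server, which is an uncaught exception unless a listener is attached.

```typescript
import * as net from "net"

// Bind an ephemeral port, then bind it a second time; the second server
// emits "error" with code EADDRINUSE instead of throwing.
function reproPortInUse(): Promise<string> {
    return new Promise((resolve) => {
        const first = net.createServer()
        first.listen(0, "127.0.0.1", () => {
            const { port } = first.address() as net.AddressInfo
            const second = net.createServer()
            // Without this listener, the process would crash here:
            second.on("error", (err: NodeJS.ErrnoException) => {
                first.close()
                resolve(err.code ?? "unknown")
            })
            second.listen(port, "127.0.0.1")
        })
    })
}
```

`reproPortInUse()` resolves with `"EADDRINUSE"`, which is exactly the case the proposed fix branches on before printing its friendly message and exiting.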

@Ker102 Ker102 merged commit 54b78a9 into main Feb 21, 2026
6 checks passed

Labels

backend, configuration, dependencies, desktop, documentation, frontend, scripts
