
Conversation

@carlosgjs carlosgjs commented Dec 17, 2025

Summary

This pull request introduces a new API endpoint to support the registration of ML pipelines for a project, primarily for integration with V2 ML processing services. It also makes ProcessingService.endpoint_url nullable.

New API endpoint for pipeline registration:

  • Added a new pipelines POST action to the project viewset in ami/main/api/views.py, allowing V2 ML processing services to register their available pipelines for a project. The endpoint parses the payload using the AsyncPipelineRegistrationRequest schema (an illustrative request is sketched below).
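For orientation, a registration request might look roughly like the sketch below. This is illustrative only: the host, token, project id, and the keys inside each pipeline item are placeholder assumptions; only the top-level shape (processing_service_name plus a pipelines list of PipelineConfigResponse-style objects) reflects the schema described in this PR.

# Hypothetical example of a V2 processing service registering its pipelines.
# Host, token, project id, and per-pipeline keys are placeholders; the top-level
# payload shape follows AsyncPipelineRegistrationRequest.
import requests

payload = {
    "processing_service_name": "example-processing-service",
    "pipelines": [
        # Each entry is a PipelineConfigResponse-style object; the keys shown are illustrative.
        {"name": "Moth Detector", "slug": "moth-detector", "version": 1},
    ],
}

response = requests.post(
    "https://antenna.example.org/api/v2/projects/42/pipelines/",
    json=payload,
    headers={"Authorization": "Token <project-manager-api-token>"},  # placeholder auth
    timeout=30,
)
response.raise_for_status()
print(response.json())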

Closes #1086

Checklist

  • I have tested these changes appropriately.
  • I have added and/or modified relevant tests.
  • I updated relevant documentation or comments.
  • I have verified that this PR follows the project's coding standards.
  • Any dependent changes have already been merged to main.

Summary by CodeRabbit

  • New Features

    • New project "pipelines" endpoint to register ML pipelines and associate processing services.
    • Processing services can run in pull mode (no endpoint configured); status and pipeline responses handle this gracefully.
    • Processing service records now accept an optional endpoint and track last-checked latency.
  • Tests

    • Added tests for pipeline registration flows, association/conflict cases, auth/validation errors, and pull-mode behavior.
  • Chores

    • DB migration to allow nullable processing service endpoint.

✏️ Tip: You can customize this high-level summary in your review settings.

netlify bot commented Dec 17, 2025

Deploy Preview for antenna-preview canceled.

Name Link
🔨 Latest commit 477aab8
🔍 Latest deploy log https://app.netlify.com/projects/antenna-preview/deploys/6973efb037497200085df93d

coderabbitai bot commented Dec 17, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Walkthrough

Adds a POST endpoint to register pipelines per project, makes ProcessingService.endpoint_url nullable for pull-mode services, and updates schemas, tasks, a migration, and tests to support pipeline registration and null-endpoint behavior.

Changes

  • API Endpoint Implementation (ami/main/api/views.py): Adds pipelines(self, request, pk=None) on ProjectViewSet: parses AsyncPipelineRegistrationRequest, handles ValidationError, ensures/creates a ProcessingService (a dummy one if needed), calls ProcessingService.create_pipelines scoped to the project, and returns PipelineRegistrationResponse. A hedged sketch of this action follows the sequence diagram below.
  • Permission & Authorization (ami/main/models.py): Extends check_custom_permission to allow the "pipelines" action for ProjectManager/owner/superuser.
  • Core Model Updates (ami/ml/models/processing_service.py): Makes endpoint_url nullable/blank, adds a last_checked_latency field, and adjusts __str__, get_status, and get_pipeline_configs to short-circuit when endpoint_url is null. A rough sketch of these changes follows this list.
  • Database Migration (ami/ml/migrations/0026_make_processing_service_endpoint_url_nullable.py): Alters ProcessingService.endpoint_url to CharField(max_length=1024, null=True, blank=True).
  • Schema & Serialization (ami/ml/schemas.py): Makes ProcessingServiceStatusResponse.endpoint_url optional and adds the AsyncPipelineRegistrationRequest Pydantic model (processing_service_name, pipelines).
  • Background Task Updates (ami/ml/tasks.py): check_processing_services_online now skips services with no endpoint_url and logs a warning.
  • API Tests (ami/main/tests.py): Adds TestProjectPipelinesAPI with tests for creating a new service, conflict on an already-associated service, associating an existing unlinked service, unauthorized access (403), and an invalid payload (400).
  • Model Tests (ami/ml/tests.py): Adds tests verifying creation and status/config behavior for ProcessingService with endpoint_url=None (pull mode).
  • Requirements Comment (requirements/base.txt): Adds a commented note about the non-binary psycopg option for some platforms.
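To make the pull-mode change concrete, here is an abridged sketch of the nullable field and the short-circuits described for ami/ml/models/processing_service.py, assuming plausible field types and return values; the real model has more fields and full method bodies.

# Abridged sketch of the pull-mode behaviour described above; field types other than
# endpoint_url, and the early-return values, are assumptions rather than the PR's code.
from django.db import models


class ProcessingService(models.Model):
    name = models.CharField(max_length=255)  # length assumed
    endpoint_url = models.CharField(max_length=1024, null=True, blank=True)
    last_checked_latency = models.FloatField(null=True, blank=True)  # type assumed

    def __str__(self):
        return f"{self.name} ({self.endpoint_url or 'pull mode'})"  # adjusted for null endpoints

    def get_status(self):
        if not self.endpoint_url:
            # Pull-mode service: nothing to poll, so skip the HTTP check entirely.
            return None  # the real method returns a status response object
        ...  # existing endpoint status check

    def get_pipeline_configs(self):
        if not self.endpoint_url:
            return []  # nothing to fetch; pipelines are pushed via the new project endpoint
        ...  # existing fetch of pipeline configs from the service endpoint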

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant ViewSet as ProjectViewSet
    participant Auth as Permissions
    participant Project as Project Model
    participant Service as ProcessingService
    participant DB as Database

    Client->>ViewSet: POST /projects/{id}/pipelines (payload)
    ViewSet->>Auth: check_custom_permission("pipelines")
    Auth-->>ViewSet: allowed
    ViewSet->>ViewSet: validate payload (AsyncPipelineRegistrationRequest)
    ViewSet->>Project: fetch project by id
    Project-->>ViewSet: project instance
    ViewSet->>Service: get_or_create by name (create dummy if missing)
    Service-->>ViewSet: service instance
    ViewSet->>Service: create_pipelines(pipelines, project=project)
    Service->>DB: persist pipeline configs / association
    Service-->>ViewSet: PipelineRegistrationResponse
    ViewSet-->>Client: return response (dict)
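A minimal sketch of a view action matching this sequence follows, assuming DRF's @action decorator and Pydantic v1-style parsing; the import paths, error payloads, and the create_pipelines signature are assumptions rather than the PR's exact code (the snippet actually under review appears later in this thread).

# Sketch only: mirrors the sequence diagram above, not the PR's exact implementation.
# Import paths, error payloads, and the create_pipelines signature are assumed.
from pydantic import ValidationError
from rest_framework import status, viewsets
from rest_framework.decorators import action
from rest_framework.response import Response

from ami.ml.models import ProcessingService  # import path assumed
from ami.ml.schemas import AsyncPipelineRegistrationRequest


class ProjectViewSet(viewsets.GenericViewSet):  # the real viewset defines many more actions
    @action(detail=True, methods=["post"])
    def pipelines(self, request, pk=None):
        """Register pipelines pushed by a V2 (pull-mode) processing service."""
        project = self.get_object()  # permission check allows ProjectManager/owner/superuser
        try:
            parsed = AsyncPipelineRegistrationRequest.parse_obj(request.data)  # Pydantic v1-style
        except ValidationError as exc:
            return Response({"detail": str(exc)}, status=status.HTTP_400_BAD_REQUEST)

        # Fetch or create the service atomically, then ensure it is linked to this project
        # (get_or_create is what the review below recommends over filter().first()).
        processing_service, _created = ProcessingService.objects.get_or_create(
            name=parsed.processing_service_name,
            defaults={"endpoint_url": None},  # pull mode: no endpoint to poll
        )
        processing_service.projects.add(project)

        registration = processing_service.create_pipelines(parsed.pipelines, project=project)
        return Response(registration.dict())  # PipelineRegistrationResponse, v1-style dump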

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Suggested labels

backend, ml

Suggested reviewers

  • mihow

Poem

🐇
I hopped in with a payload bright,
Pipelines hummed in soft moonlight,
No endpoint? I listen and pull,
Linking services, pipes made full,
Async carrots for every site!

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Title check: ✅ Passed. The title clearly and concisely describes the main change: adding an endpoint to register pipelines, which matches the core functionality introduced.
  • Description check: ✅ Passed. The description covers the main changes and references issue #1086; however, it lacks testing instructions and deployment notes, and only partially follows the template's structure.
  • Linked Issues check: ✅ Passed. The PR implements the pipeline registration endpoint [#1086] with validation, ProcessingService handling, and comprehensive tests covering creation, association, error cases, and authorization.
  • Out of Scope Changes check: ✅ Passed. All changes are within scope: endpoint implementation, ProcessingService.endpoint_url nullability, proper handling of pull-mode services, and comprehensive tests directly supporting the linked issue objectives.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 85.71%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Comment @coderabbitai help to get the list of available commands and usage tips.

@carlosgjs carlosgjs changed the title from "RFC: V2 endpoint to register pipeliens" to "RFC: V2 endpoint to register pipelines" on Jan 13, 2026
@carlosgjs carlosgjs requested a review from mihow January 13, 2026 18:09
netlify bot commented Jan 16, 2026

👷 Deploy request for antenna-ssec pending review.

Visit the deploys page to approve it

Name Link
🔨 Latest commit 477aab8

@carlosgjs carlosgjs marked this pull request as ready for review January 21, 2026 22:46
Copilot AI review requested due to automatic review settings January 21, 2026 22:46
@carlosgjs carlosgjs changed the title from "RFC: V2 endpoint to register pipelines" to "PSV2: endpoint to register pipelines" on Jan 21, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@ami/main/api/views.py`:
- Around line 240-262: The current logic in the view returns a 400 when a
ProcessingService (found via
ProcessingService.objects.filter(name=parsed.processing_service_name).first())
is already associated with the project, which prevents idempotent
re-registration; update the branch handling the existing processing_service so
that if project is already in processing_service.projects.all() you do not
return Response(status=400) but simply continue (no-op) and allow the endpoint
to proceed (e.g., log an info/debug message) — ensure you keep the
processing_service.projects.add(project)/save() when association is missing, and
remove the early return that blocks subsequent pipeline registration.

In `@ami/ml/schemas.py`:
- Around line 326-327: The endpoint_url annotation is optional but currently
still required because it has no default; update the schema so endpoint_url has
an explicit default (e.g., set endpoint_url to None or use Field(default=None)
if this is a Pydantic model) so parsing succeeds when senders omit it—modify the
endpoint_url declaration (near the latency: float field) to include the default
None.

In `@ami/ml/tasks.py`:
- Around line 112-114: The current check only skips when service.endpoint_url is
None but still allows empty strings; update the conditional around
service.endpoint_url in the loop (the block that calls logger.warning and
continue) to treat None, empty, or whitespace-only values as missing — e.g.,
ensure you test both falsy and stripped emptiness (safe-check to avoid calling
.strip() on None) so that logger.warning(f"Processing service {service} has no
endpoint URL, skipping.") is used and the loop continues for None/"", or
whitespace-only endpoint_url.
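A rough sketch of the two fixes requested above is shown here under stated assumptions: the other fields of ProcessingServiceStatusResponse are omitted, and the task loop is wrapped in a standalone function with an explicit parameter purely to keep the snippet self-contained; neither snippet is the project's exact code.

# Sketch of the two suggested fixes; surrounding code is abridged and assumed.
import logging
from typing import Optional

from pydantic import BaseModel, Field

logger = logging.getLogger(__name__)


class ProcessingServiceStatusResponse(BaseModel):
    # ...other fields omitted...
    latency: float
    # Optional *and* defaulted, so senders may omit the key entirely.
    endpoint_url: Optional[str] = Field(default=None)


def check_services(processing_services):
    """Abridged loop from the periodic task; only the skip logic is shown."""
    for service in processing_services:
        # Treat None, "", and whitespace-only values as a missing endpoint.
        if not service.endpoint_url or not service.endpoint_url.strip():
            logger.warning(f"Processing service {service} has no endpoint URL, skipping.")
            continue
        ...  # existing online check for services with a real endpoint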
🧹 Nitpick comments (1)
ami/main/tests.py (1)

3457-3459: Consider disabling default fixtures for these API tests.
create_defaults=True builds extra related objects these tests don’t use; turning it off keeps setup lean.

♻️ Suggested change
-        self.project = Project.objects.create(name="Test Project", owner=self.user, create_defaults=True)
-        self.other_project = Project.objects.create(name="Other Project", owner=self.other_user, create_defaults=True)
+        self.project = Project.objects.create(name="Test Project", owner=self.user, create_defaults=False)
+        self.other_project = Project.objects.create(name="Other Project", owner=self.other_user, create_defaults=False)

Copilot AI left a comment


Pull request overview

This pull request introduces a new API endpoint for V2 ML processing services to register pipelines with projects in "pull mode" (without requiring an endpoint URL). The changes enable processing services to push pipeline configurations to Antenna rather than Antenna pulling them from a service endpoint.

Changes:

  • Added a new POST endpoint /api/v2/projects/{id}/pipelines/ for pipeline registration
  • Made ProcessingService.endpoint_url nullable to support pull-mode services
  • Updated ProcessingService methods to handle null endpoint URLs gracefully

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 8 comments.

Show a summary per file
  • ami/main/api/views.py: Added new pipelines action to ProjectViewSet for receiving pipeline registrations from V2 services
  • ami/main/models.py: Added permission check for "pipelines" action requiring ProjectManager role
  • ami/ml/models/processing_service.py: Made endpoint_url nullable and updated get_status() and get_pipeline_configs() to handle null values
  • ami/ml/schemas.py: Made endpoint_url nullable in ProcessingServiceStatusResponse and added AsyncPipelineRegistrationRequest schema
  • ami/ml/migrations/0026_make_processing_service_endpoint_url_nullable.py: Database migration to make endpoint_url nullable
  • ami/ml/tasks.py: Added check to skip processing services without endpoint URLs in periodic status check
  • ami/ml/tests.py: Added comprehensive tests for null endpoint_url handling
  • ami/main/tests.py: Added test suite for the new pipelines API endpoint
  • requirements/base.txt: Added explanatory comment about psycopg binary vs non-binary versions


Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@ami/ml/schemas.py`:
- Around line 339-345: Update the inaccurate docstring on
AsyncPipelineRegistrationRequest to reference AsyncPipelineRegistrationRequest
(not PipelineRegistrationResponse) and clearly state it represents the async
callback payload containing processing results; rename the output-oriented model
PipelineRegistrationResponse to a more generic name like
PipelineProcessingResult (or create a new PipelineProcessingResult model) and
adjust its fields (timestamp, success, error, pipelines_created,
algorithms_created) to be output/result-oriented (optional or excluded from
required input validation) so the
AsyncPipelineRegistrationRequest.pipeline_response type reflects a result object
rather than required caller-supplied output fields; update all references to
PipelineRegistrationResponse to the new name (or to the new model) including the
AsyncPipelineRegistrationRequest definition and related validation/serialization
logic.

carlosgjs and others added 3 commits January 21, 2026 15:15
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@ami/main/api/views.py`:
- Around line 232-233: Update the docstring in ami/main/api/views.py that
describes the AsyncPipelineRegistrationRequest payload: change the field name
from `pipelines_response` to the correct `pipeline_response` so it reads that
the list of PipelineConfigResponse objects is under the
`pipeline_response.pipelines` key; keep references to
AsyncPipelineRegistrationRequest and PipelineConfigResponse so readers can
locate the schema and adjust any adjacent wording to match the actual field
name.

Comment on lines +232 to +233
Expected payload: AsyncPipelineRegistrationRequest (pydantic schema) containing a
list of PipelineConfigResponse objects under the `pipelines_response.pipelines` key.

⚠️ Potential issue | 🟡 Minor

Fix docstring field name to match the schema.
The field is pipeline_response, not pipelines_response, so the current text is misleading.

📝 Proposed docstring fix
-        list of PipelineConfigResponse objects under the `pipelines_response.pipelines` key.
+        list of PipelineConfigResponse objects under the `pipeline_response.pipelines` key.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@ami/main/api/views.py`:
- Around line 254-263: Replace the non-atomic filter().first() + create() flow
for ProcessingService with an atomic get_or_create using the
parsed.processing_service_name; call
ProcessingService.objects.get_or_create(name=parsed.processing_service_name,
defaults={...}) to obtain (processing_service, created) and then ensure the
project is associated (processing_service.projects.add(project)) regardless of
created, mirroring the get_or_create_default_processing_service() pattern to
avoid TOCTOU races.
♻️ Duplicate comments (1)
ami/main/api/views.py (1)

232-233: Docstring schema key is incorrect.

The request schema exposes pipelines directly, not pipelines_response.pipelines. Please update the docstring to match the actual payload structure.

📝 Proposed docstring fix
-        list of PipelineConfigResponse objects under the `pipelines_response.pipelines` key.
+        list of PipelineConfigResponse objects under the `pipelines` key.

Comment on lines +254 to +263
        # Check if a processing service with the same name already exists,
        processing_service = ProcessingService.objects.filter(name=parsed.processing_service_name).first()

        if not processing_service:
            # Create a processing service and associate it with the project
            processing_service = ProcessingService.objects.create(
                name=parsed.processing_service_name,
                endpoint_url=None,  # TODO: depends on https://github.com/RolnickLab/antenna/pull/1090
            )
            processing_service.projects.add(project)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find ProcessingService model definition
fd 'processing_service.py' -o 'models.py' | head -20 | xargs rg -l "class ProcessingService" 2>/dev/null || find . -type f -name "*.py" | xargs rg -l "class ProcessingService" 2>/dev/null | head -5

Repository: RolnickLab/antenna

Length of output: 460


🏁 Script executed:

#!/bin/bash
# Search for ProcessingService model definition
rg "class ProcessingService" -A 30 --type py

Repository: RolnickLab/antenna

Length of output: 14391


🏁 Script executed:

cat -n ami/ml/models/processing_service.py

Repository: RolnickLab/antenna

Length of output: 13000


🏁 Script executed:

# Also check the actual view code at the specified lines
sed -n '240,280p' ami/main/api/views.py

Repository: RolnickLab/antenna

Length of output: 2220


Avoid TOCTOU race condition when creating ProcessingService.

filter().first() followed by create() is race-prone. Under concurrent requests, multiple ProcessingService instances with the same name can be created. Prefer get_or_create() to atomically check and create in a single database operation; the codebase already uses this pattern elsewhere (see get_or_create_default_processing_service()).

🛠️ Suggested fix
-        processing_service = ProcessingService.objects.filter(name=parsed.processing_service_name).first()
-
-        if not processing_service:
-            # Create a processing service and associate it with the project
-            processing_service = ProcessingService.objects.create(
-                name=parsed.processing_service_name,
-                endpoint_url=None,  # TODO: depends on https://github.com/RolnickLab/antenna/pull/1090
-            )
-            processing_service.projects.add(project)
-            logger.info(f"Created dummy processing service {processing_service} for project {project.pk}")
-        else:
+        processing_service, created = ProcessingService.objects.get_or_create(
+            name=parsed.processing_service_name,
+            defaults={"endpoint_url": None},  # TODO: depends on https://github.com/RolnickLab/antenna/pull/1090
+        )
+        if created:
+            processing_service.projects.add(project)
+            logger.info(f"Created dummy processing service {processing_service} for project {project.pk}")
+        else:
             # Associate with the project if not already associated
             if project not in processing_service.projects.all():
                 processing_service.projects.add(project)
                 logger.info(f"Associated processing service {processing_service} with project {project.pk}")
             else:
                 # Processing service already exists for this project
                 logger.warning(f"Processing service {processing_service} already exists for project {project.pk}")
                 return Response({"detail": "Processing service already exists."}, status=status.HTTP_400_BAD_REQUEST)



Development

Successfully merging this pull request may close these issues.

PSv2: Pipeline registration for async processing services
