PSV2: endpoint to register pipelines #1076
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds a POST endpoint to register pipelines per project, makes ProcessingService.endpoint_url nullable for pull-mode services, and updates schemas, tasks, migration and tests to support pipeline registration and null-endpoint behavior.
Sequence Diagram

sequenceDiagram
participant Client as Client
participant ViewSet as ProjectViewSet
participant Auth as Permissions
participant Project as Project Model
participant Service as ProcessingService
participant DB as Database
Client->>ViewSet: POST /projects/{id}/pipelines (payload)
ViewSet->>Auth: check_custom_permission("pipelines")
Auth-->>ViewSet: allowed
ViewSet->>ViewSet: validate payload (AsyncPipelineRegistrationRequest)
ViewSet->>Project: fetch project by id
Project-->>ViewSet: project instance
ViewSet->>Service: get_or_create by name (create dummy if missing)
Service-->>ViewSet: service instance
ViewSet->>Service: create_pipelines(pipelines, project=project)
Service->>DB: persist pipeline configs / association
Service-->>ViewSet: PipelineRegistrationResponse
ViewSet-->>Client: return response (dict)
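The client side of the flow above can be sketched by building the registration payload. This is illustrative only: the field names follow the AsyncPipelineRegistrationRequest schema as discussed in this review, and the concrete service name and pipeline entries are invented for the example.

```python
import json

# Hypothetical payload for POST /api/v2/projects/{id}/pipelines/.
# Field names follow the AsyncPipelineRegistrationRequest schema described in
# this review; the specific values below are made up for illustration.
payload = {
    "processing_service_name": "example-pull-mode-service",  # assumed example name
    "pipelines": [
        {
            "name": "Example Pipeline",
            "slug": "example-pipeline",
            "version": 1,
        }
    ],
}

body = json.dumps(payload)
```

An actual client would send this body with an authenticated POST request (e.g. via `requests.post`); that part is omitted here since authentication and host details depend on the deployment.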
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@ami/main/api/views.py`:
- Around line 240-262: The current logic in the view returns a 400 when a
ProcessingService (found via
ProcessingService.objects.filter(name=parsed.processing_service_name).first())
is already associated with the project, which prevents idempotent
re-registration; update the branch handling the existing processing_service so
that if project is already in processing_service.projects.all() you do not
return Response(status=400) but simply continue (no-op) and allow the endpoint
to proceed (e.g., log an info/debug message) — ensure you keep the
processing_service.projects.add(project)/save() when association is missing, and
remove the early return that blocks subsequent pipeline registration.
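The idempotent behaviour this comment asks for can be illustrated with a small in-memory sketch. This is plain Python, not Django; the class and function names are hypothetical stand-ins for the model objects and view logic.

```python
class ServiceStub:
    """Stand-in for a ProcessingService with a many-to-many projects relation."""

    def __init__(self, name):
        self.name = name
        self.projects = set()


def associate(service, project_id):
    """Associate a project with a service; re-registration is a no-op.

    Returns True if a new association was created, False if it already existed.
    """
    if project_id in service.projects:
        # Already associated: do not return a 400, just continue (idempotent).
        return False
    service.projects.add(project_id)
    return True


service = ServiceStub("demo-service")
first = associate(service, 42)
second = associate(service, 42)  # re-registration: no error, no duplicate
```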
In `@ami/ml/schemas.py`:
- Around line 326-327: The endpoint_url annotation is optional but currently
still required because it has no default; update the schema so endpoint_url has
an explicit default (e.g., set endpoint_url to None or use Field(default=None)
if this is a Pydantic model) so parsing succeeds when senders omit it—modify the
endpoint_url declaration (near the latency: float field) to include the default
None.
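The distinction between an optional annotation and an actual default can be shown with a plain dataclass; the same rule applies to Pydantic models, where `Optional[str]` alone still requires the field unless a default (e.g. `Field(default=None)`) is given. The class and field names here are a simplified stand-in for the real schema in ami/ml/schemas.py.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class StatusResponse:
    """Sketch of the relevant fields; the real schema lives in ami/ml/schemas.py."""

    latency: float
    # An Optional[...] annotation alone does not make a field omittable;
    # it needs an explicit default for parsing to succeed when it is absent.
    endpoint_url: Optional[str] = None


# A sender that omits endpoint_url now constructs successfully:
resp = StatusResponse(latency=0.12)
```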
In `@ami/ml/tasks.py`:
- Around line 112-114: The current check only skips when service.endpoint_url is
None but still allows empty strings; update the conditional around
service.endpoint_url in the loop (the block that calls logger.warning and
continue) to treat None, empty, or whitespace-only values as missing — e.g.,
ensure you test both falsy and stripped emptiness (safe-check to avoid calling
.strip() on None) so that logger.warning(f"Processing service {service} has no
endpoint URL, skipping.") is used and the loop continues for None/"", or
whitespace-only endpoint_url.
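The None/empty/whitespace check suggested above can be written as a small helper. This is a sketch; the actual task code may inline the condition in the loop rather than define a function.

```python
def has_usable_endpoint(endpoint_url):
    """Return True only for a non-None, non-empty, non-whitespace endpoint URL.

    The `endpoint_url and ...` short-circuit avoids calling .strip() on None.
    """
    return bool(endpoint_url and endpoint_url.strip())
```

In the task loop, services where this returns False would trigger the `logger.warning(...)` and `continue` branch.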
🧹 Nitpick comments (1)
ami/main/tests.py (1)
3457-3459: Consider disabling default fixtures for these API tests.
`create_defaults=True` builds extra related objects these tests don’t use; turning it off keeps setup lean.

♻️ Suggested change
- self.project = Project.objects.create(name="Test Project", owner=self.user, create_defaults=True)
- self.other_project = Project.objects.create(name="Other Project", owner=self.other_user, create_defaults=True)
+ self.project = Project.objects.create(name="Test Project", owner=self.user, create_defaults=False)
+ self.other_project = Project.objects.create(name="Other Project", owner=self.other_user, create_defaults=False)
Pull request overview
This pull request introduces a new API endpoint for V2 ML processing services to register pipelines with projects in "pull mode" (without requiring an endpoint URL). The changes enable processing services to push pipeline configurations to Antenna rather than Antenna pulling them from a service endpoint.
Changes:
- Added a new POST endpoint /api/v2/projects/{id}/pipelines/ for pipeline registration
- Made ProcessingService.endpoint_url nullable to support pull-mode services
- Updated ProcessingService methods to handle null endpoint URLs gracefully
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 8 comments.
Show a summary per file
| File | Description |
|---|---|
| ami/main/api/views.py | Added new pipelines action to ProjectViewSet for receiving pipeline registrations from V2 services |
| ami/main/models.py | Added permission check for "pipelines" action requiring ProjectManager role |
| ami/ml/models/processing_service.py | Made endpoint_url nullable and updated get_status() and get_pipeline_configs() to handle null values |
| ami/ml/schemas.py | Made endpoint_url nullable in ProcessingServiceStatusResponse and added AsyncPipelineRegistrationRequest schema |
| ami/ml/migrations/0026_make_processing_service_endpoint_url_nullable.py | Database migration to make endpoint_url nullable |
| ami/ml/tasks.py | Added check to skip processing services without endpoint URLs in periodic status check |
| ami/ml/tests.py | Added comprehensive tests for null endpoint_url handling |
| ami/main/tests.py | Added test suite for the new pipelines API endpoint |
| requirements/base.txt | Added explanatory comment about psycopg binary vs non-binary versions |
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@ami/ml/schemas.py`:
- Around line 339-345: Update the inaccurate docstring on
AsyncPipelineRegistrationRequest to reference AsyncPipelineRegistrationRequest
(not PipelineRegistrationResponse) and clearly state it represents the async
callback payload containing processing results; rename the output-oriented model
PipelineRegistrationResponse to a more generic name like
PipelineProcessingResult (or create a new PipelineProcessingResult model) and
adjust its fields (timestamp, success, error, pipelines_created,
algorithms_created) to be output/result-oriented (optional or excluded from
required input validation) so the
AsyncPipelineRegistrationRequest.pipeline_response type reflects a result object
rather than required caller-supplied output fields; update all references to
PipelineRegistrationResponse to the new name (or to the new model) including the
AsyncPipelineRegistrationRequest definition and related validation/serialization
logic.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@ami/main/api/views.py`:
- Around line 232-233: Update the docstring in ami/main/api/views.py that
describes the AsyncPipelineRegistrationRequest payload: change the field name
from `pipelines_response` to the correct `pipeline_response` so it reads that
the list of PipelineConfigResponse objects is under the
`pipeline_response.pipelines` key; keep references to
AsyncPipelineRegistrationRequest and PipelineConfigResponse so readers can
locate the schema and adjust any adjacent wording to match the actual field
name.
Expected payload: AsyncPipelineRegistrationRequest (pydantic schema) containing a
list of PipelineConfigResponse objects under the `pipelines_response.pipelines` key.
Fix docstring field name to match the schema.
The field is pipeline_response, not pipelines_response, so the current text is misleading.
📝 Proposed docstring fix
- list of PipelineConfigResponse objects under the `pipelines_response.pipelines` key.
+ list of PipelineConfigResponse objects under the `pipeline_response.pipelines` key.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Expected payload: AsyncPipelineRegistrationRequest (pydantic schema) containing a
list of PipelineConfigResponse objects under the `pipeline_response.pipelines` key.
🤖 Prompt for AI Agents
In `@ami/main/api/views.py` around lines 232 - 233, Update the docstring in
ami/main/api/views.py that describes the AsyncPipelineRegistrationRequest
payload: change the field name from `pipelines_response` to the correct
`pipeline_response` so it reads that the list of PipelineConfigResponse objects
is under the `pipeline_response.pipelines` key; keep references to
AsyncPipelineRegistrationRequest and PipelineConfigResponse so readers can
locate the schema and adjust any adjacent wording to match the actual field
name.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@ami/main/api/views.py`:
- Around line 254-263: Replace the non-atomic filter().first() + create() flow
for ProcessingService with an atomic get_or_create using the
parsed.processing_service_name; call
ProcessingService.objects.get_or_create(name=parsed.processing_service_name,
defaults={...}) to obtain (processing_service, created) and then ensure the
project is associated (processing_service.projects.add(project)) regardless of
created, mirroring the get_or_create_default_processing_service() pattern to
avoid TOCTOU races.
♻️ Duplicate comments (1)
ami/main/api/views.py (1)
232-233: Docstring schema key is incorrect. The request schema exposes `pipelines` directly, not `pipelines_response.pipelines`. Please update the docstring to match the actual payload structure.

📝 Proposed docstring fix

- list of PipelineConfigResponse objects under the `pipelines_response.pipelines` key.
+ list of PipelineConfigResponse objects under the `pipelines` key.
# Check if a processing service with the same name already exists,
processing_service = ProcessingService.objects.filter(name=parsed.processing_service_name).first()

if not processing_service:
    # Create a processing service and associate it with the project
    processing_service = ProcessingService.objects.create(
        name=parsed.processing_service_name,
        endpoint_url=None,  # TODO: depends on https://github.com/RolnickLab/antenna/pull/1090
    )
    processing_service.projects.add(project)
Avoid TOCTOU race condition when creating ProcessingService.
filter().first() followed by create() is race-prone. Under concurrent requests, multiple ProcessingService instances with the same name can be created. Prefer get_or_create() to atomically check and create in a single database operation; the codebase already uses this pattern elsewhere (see get_or_create_default_processing_service()).
🛠️ Suggested fix
- processing_service = ProcessingService.objects.filter(name=parsed.processing_service_name).first()
-
- if not processing_service:
- # Create a processing service and associate it with the project
- processing_service = ProcessingService.objects.create(
- name=parsed.processing_service_name,
- endpoint_url=None, # TODO: depends on https://github.com/RolnickLab/antenna/pull/1090
- )
- processing_service.projects.add(project)
- logger.info(f"Created dummy processing service {processing_service} for project {project.pk}")
- else:
+ processing_service, created = ProcessingService.objects.get_or_create(
+ name=parsed.processing_service_name,
+ defaults={"endpoint_url": None}, # TODO: depends on https://github.com/RolnickLab/antenna/pull/1090
+ )
+ if created:
+ processing_service.projects.add(project)
+ logger.info(f"Created dummy processing service {processing_service} for project {project.pk}")
+ else:
# Associate with the project if not already associated
if project not in processing_service.projects.all():
processing_service.projects.add(project)
logger.info(f"Associated processing service {processing_service} with project {project.pk}")
else:
# Processing service already exists for this project
logger.warning(f"Processing service {processing_service} already exists for project {project.pk}")
return Response({"detail": "Processing service already exists."}, status=status.HTTP_400_BAD_REQUEST)

🤖 Prompt for AI Agents
In `@ami/main/api/views.py` around lines 254 - 263, Replace the non-atomic
filter().first() + create() flow for ProcessingService with an atomic
get_or_create using the parsed.processing_service_name; call
ProcessingService.objects.get_or_create(name=parsed.processing_service_name,
defaults={...}) to obtain (processing_service, created) and then ensure the
project is associated (processing_service.projects.add(project)) regardless of
created, mirroring the get_or_create_default_processing_service() pattern to
avoid TOCTOU races.
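The `(object, created)` return contract of `get_or_create` recommended above can be sketched in plain Python. This is an in-memory stand-in for illustration only: the real atomicity of Django's `get_or_create` comes from the database and a unique constraint, which this sketch does not model.

```python
_registry = {}


def get_or_create(name, defaults=None):
    """Mimic the return contract of Django's QuerySet.get_or_create: (obj, created)."""
    if name in _registry:
        return _registry[name], False
    obj = {"name": name, **(defaults or {})}
    _registry[name] = obj
    return obj, True


svc, created = get_or_create("demo-service", defaults={"endpoint_url": None})
svc2, created2 = get_or_create("demo-service")  # second call returns the same object
```

Whichever branch is taken, the view can then associate the project unconditionally (`processing_service.projects.add(project)` is itself idempotent for a many-to-many relation).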
Summary
This pull request introduces a new API endpoint to support the registration of ML pipelines for a project, primarily for integration with V2 ML processing services. It also changes `ProcessingService.endpoint_url` to be nullable.

New API endpoint for pipeline registration:
- Added a `pipelines` POST action to the project viewset in `ami/main/api/views.py`, allowing V2 ML processing services to register available pipelines for a project. The endpoint parses the payload using the AsyncPipelineRegistrationRequest schema.

Closes #1086