
Conversation

@SilvioC2C

Sometimes the backend is badly configured, e.g. with an invalid URL, or some other error condition applies. This happens more frequently on dev or test server instances.

In such cases we don't want to keep trying to send the Exchange Record indefinitely.

Supersedes #960
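
For context, here is a minimal sketch of the kind of guard this change introduces, assuming the retry cutoff lives on the EDI backend and is time-based (the method name matches the diff below, but the threshold and the logic are illustrative assumptions, not the actual implementation):

```python
from datetime import timedelta

from odoo import fields, models


class EDIBackend(models.Model):
    _inherit = "edi.backend"

    def _send_should_retry(self, exchange_record):
        # Illustrative rule: keep retrying only while the record is younger
        # than 24 hours; after that, let the job fail instead of re-queueing.
        deadline = exchange_record.create_date + timedelta(hours=24)
        return fields.Datetime.now() < deadline
```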

@OCA-git-bot
Contributor

Hi @etobella, @simahawk,
some modules you are maintaining are being modified, check this out!

job.perform()
with freeze_time("2024-01-11 08:50:00"):
    # + 23 hours and 50 minutes
    job = self.backend.with_delay().exchange_send(record)
Contributor


Why not use the same job you got before?
If the identity_key works well, you should still get the same job here anyway, but then I think it's useless to call send again.
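
For reference, queue_job deduplicates jobs via identity_key; below is a hedged sketch of what enqueueing with one could look like (identity_exact is real queue_job API, but whether edi_oca passes it here is an assumption, and _delay_exchange_send is a hypothetical helper name):

```python
from odoo import models
from odoo.addons.queue_job.job import identity_exact


class EDIBackend(models.Model):
    _inherit = "edi.backend"

    def _delay_exchange_send(self, exchange_record):
        # With an identity_key, enqueueing a send for the same record while a
        # pending job already exists returns that job instead of a new one.
        return self.with_delay(identity_key=identity_exact).exchange_send(
            exchange_record
        )
```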

Contributor


@SilvioC2C ping

job = self.backend.with_delay().exchange_send(record)
res = job.perform()
self.assertIn("Error", res)
job_counter.search_created()
Contributor


This search seems useless, as you don't use the result.

if self._send_should_retry(exchange_record):
    raise RetryableJobError(
        error, **exchange_record._job_retry_params()
    ) from err
Contributor


Thinking out loud...
In theory, a case where we spawn another job for the same record should not exist, thanks to the identity_key.
Hence, I assume jobs won't be retried infinitely once we reach the max retry count for the job.
If that does not happen, it means the identity_key is not working and we end up with more than one job to send the same record.
Am I wrong?
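
To make the retry interplay concrete, here is a rough sketch of how RetryableJobError and queue_job's retry counter interact, reusing _send_should_retry and _job_retry_params from the diff above (the surrounding method body and the _do_send name are illustrative assumptions, not the PR's actual code):

```python
from odoo import models
from odoo.addons.queue_job.exception import RetryableJobError


class EDIBackend(models.Model):
    _inherit = "edi.backend"

    def exchange_send(self, exchange_record):
        try:
            return self._do_send(exchange_record)  # hypothetical internal send
        except Exception as err:
            if self._send_should_retry(exchange_record):
                # queue_job postpones and re-runs the job; once its retry count
                # exceeds max_retries it moves to "failed" instead of retrying.
                raise RetryableJobError(
                    str(err), **exchange_record._job_retry_params()
                ) from err
            raise
```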

Contributor


@SilvioC2C ping

@github-actions

There hasn't been any activity on this pull request in the past 4 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days.
If you want this PR to never become stale, please ask a PSC member to apply the "no stale" label.

@github-actions bot added the "stale" label (PR/Issue without recent activity, it'll be soon closed automatically) on Oct 13, 2024
@github-actions bot closed this on Nov 17, 2024
@gurneyalex added the "no stale" label (use this label to prevent the automated stale action from closing this PR/Issue) and removed the "stale" label on Nov 29, 2024
@gurneyalex reopened this on Nov 29, 2024
@OCA-git-bot
Contributor

Hi @etobella, @simahawk,
some modules you are maintaining are being modified, check this out!
