Conversation
…ardless eval loss, because the correctness of the eval loss calculation is questionable.
…hs without improvement.
README.md
Outdated
| # for the model with size 512:
| --max_steps 150000
| ```
Should work by default, not with options.
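For illustration, a defaults-first CLI could look like the sketch below. Only the `--max_steps` flag and the 150000 value come from the README snippet above; the argparse setup and help text are assumptions:

```python
import argparse

def build_parser():
    """Build a CLI parser where --max_steps has a sensible default,
    so users do not have to pass it explicitly."""
    parser = argparse.ArgumentParser(description="g2p_seq2seq training")
    # Default taken from the README example; users can still override it.
    parser.add_argument("--max_steps", type=int, default=150000,
                        help="Maximum number of training steps "
                             "(default: %(default)s)")
    return parser

args = build_parser().parse_args([])  # works with no options at all
print(args.max_steps)  # 150000
```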
g2p_seq2seq/g2p.py
Outdated
| num_iter_cover_train = int(sum(train_bucket_sizes) /
|                            self.params.batch_size /
|                            self.params.steps_per_checkpoint)
| current_step, iter_inx, num_epochs_last_impr, max_num_epochs,\
"inx" is an unclear abbreviation.
g2p_seq2seq/g2p.py
Outdated
| self.params.batch_size /
| self.params.steps_per_checkpoint)
| current_step, iter_inx, num_epochs_last_impr, max_num_epochs,\
| num_up_trends, num_down_trends = 0, 0, 0, 2, 0, 0
"trend" is not the right word here.
g2p_seq2seq/g2p.py
Outdated
| current_step, iter_inx, num_epochs_last_impr, max_num_epochs,\
| num_up_trends, num_down_trends = 0, 0, 0, 2, 0, 0
| prev_train_losses, prev_valid_losses, prev_epoch_valid_losses = [], [], []
| num_iter_cover_train = max(1, int(sum(train_bucket_sizes) /
This is called "epoch", no? "cover" is not a good word in this context.
g2p_seq2seq/g2p.py
Outdated
| if len(prev_epoch_valid_losses) > 0:
| print('Previous min epoch eval loss: %f, current epoch eval loss: %f' %
| (min(prev_epoch_valid_losses), epoch_eval_loss))
| # Check if there was improvement during last epoch
g2p_seq2seq/g2p.py
Outdated
| # Check if there was improvement during last epoch
| if (epoch_eval_loss < min(prev_epoch_valid_losses)):
| if num_epochs_last_impr > max_num_epochs/1.5:
| max_num_epochs = int(1.5 * num_epochs_last_impr)
The constant 1.5 must be extracted into something standalone and properly named, not used in multiple places in the code without a name.
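As a sketch, the extraction could look like this (the constant name and the standalone helper are hypothetical; the growth logic is taken from the diff above):

```python
# Hypothetical name; the original diff uses the bare literal 1.5 twice.
PATIENCE_GROWTH_FACTOR = 1.5

def update_max_num_epochs(max_num_epochs, num_epochs_last_impr):
    """Grow the allowed no-improvement window when the last improvement
    took most of the current window."""
    if num_epochs_last_impr > max_num_epochs / PATIENCE_GROWTH_FACTOR:
        max_num_epochs = int(PATIENCE_GROWTH_FACTOR * num_epochs_last_impr)
    return max_num_epochs

print(update_max_num_epochs(2, 3))   # 4: window grows
print(update_max_num_epochs(10, 1))  # 10: window unchanged
```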
g2p_seq2seq/g2p.py
Outdated
| def __calc_epoch_loss(self, epoch_losses):
| """Calculate average loss during the epoch.
The comment is wrong; this is not really an average.
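If a plain arithmetic mean is what the docstring should promise, a minimal standalone version might look like this (the implementation is an assumption, not the PR's actual code; the name mirrors the diff):

```python
def calc_epoch_loss(epoch_losses):
    """Return the arithmetic mean of the per-checkpoint losses
    recorded during one epoch."""
    if not epoch_losses:
        raise ValueError("epoch_losses must be non-empty")
    return sum(epoch_losses) / len(epoch_losses)

print(calc_epoch_loss([0.5, 0.25, 0.75]))  # 0.5
```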
g2p_seq2seq/g2p.py
Outdated
| prev_train_losses.append(train_loss)
| prev_valid_losses.append(eval_loss)
| step_time, train_loss = 0.0, 0.0
| iter_idx += 1
Use current step instead of iter_idx
g2p_seq2seq/g2p.py
Outdated
| num_epochs_last_impr = 0
| else:
| print('No improvement during last epoch.')
| num_epochs_last_impr += 1
epochs_without_improvement
g2p_seq2seq/g2p.py
Outdated
| num_iter_total += num_iter_cover_valid
| for batch_id in xrange(num_iter_cover_valid):
| iter_total += iter_per_valid
| for batch_id in xrange(iter_per_valid):
Use xrange with batch_size step.
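A sketch of stepping through the data with a batch_size stride (Python 3 `range` stands in for the Python 2 `xrange` used in the code; the helper name and data are hypothetical):

```python
def iter_batches(dataset, batch_size):
    """Yield successive batches by stepping through the data with a
    batch_size stride, instead of precomputing an iteration count."""
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]

valid_set = list(range(10))
batches = list(iter_batches(valid_set, 4))
print(len(batches))  # 3 batches of sizes 4, 4, 2
print(batches[-1])   # [8, 9]
```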
g2p_seq2seq/g2p.py
Outdated
| self.params.batch_size))
| num_iter_total += num_iter_cover_valid
| for batch_id in xrange(num_iter_cover_valid):
| iter_total += iter_per_valid
Count iter_total in the inner loop.
Add new stop criteria. Previously, the allowable number of epochs without improvement was fixed, and training stopped if no improvement was seen during this window. The new stop criteria allow that window to grow, depending on the number of epochs that previous improvements needed during training.
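The described criterion could be sketched roughly as follows. Names, defaults, and structure are illustrative, not the PR's actual code; the 1.5 growth factor and initial window of 2 mirror the values in the diff above:

```python
def should_stop(epoch_eval_losses, growth_factor=1.5, initial_patience=2):
    """Early stopping with an adaptive patience window: an improvement
    that arrives near the end of the current window enlarges the window
    in proportion to how many epochs that improvement took."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    patience = initial_patience
    for loss in epoch_eval_losses:
        if loss < best_loss:
            best_loss = loss
            # An improvement that needed most of the window widens it.
            if epochs_without_improvement > patience / growth_factor:
                patience = int(growth_factor * epochs_without_improvement)
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return True  # stop training
    return False

print(should_stop([1.0, 0.9, 0.95, 0.96]))  # True: 2 epochs, no improvement
print(should_stop([1.0, 0.9, 0.8, 0.7]))    # False: steady improvement
```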