My current issue isn’t that epochs are skipped at all; it’s how the system goes about it. The current system saves a maximum of 20 epochs and distributes the saved ones evenly. Right now, that just seems to mean deleting every 2nd epoch.
This doesn’t seem too bad until you go even one over the 20-epoch limit.
Say you set it to 21 epochs, maybe just to have some extra steps during the training process.
Training goes on and the 2nd epoch is skipped. That’s one gone, so the remaining 20 (the 1st plus the other 19) would already fit within the limit. But then it skips the 4th and 6th, the 8th and 10th, the 12th, and so on, getting scarily close to wiping out half the run. For what reason? Why go that far when you’re only 1 over the 20 maximum?
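A minimal sketch of what the described behavior amounts to, assuming the system literally drops every 2nd epoch once the run exceeds the limit (the function name and the `MAX_KEPT` constant are hypothetical, not from the actual tool):

```python
MAX_KEPT = 20  # hypothetical name for the save limit

def kept_epochs_current(total_epochs, max_kept=MAX_KEPT):
    """Epochs (1-based) that survive if every 2nd epoch is dropped
    whenever the run goes over the limit, as described above."""
    if total_epochs <= max_kept:
        return list(range(1, total_epochs + 1))
    # Drops epochs 2, 4, 6, ... across the whole run.
    return [e for e in range(1, total_epochs + 1) if e % 2 == 1]

print(len(kept_epochs_current(21)))  # only 11 of 21 epochs survive
```

Note how far this overshoots: going just one epoch over the limit halves the number of saved epochs instead of removing a single one.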
The system should instead skip only as many epochs as you are over 20. So if you set it to 22 epochs, the 2nd and 4th are skipped, and the other 20 are kept. That feels much more sensible than going Hail Mary across the whole set of epochs.
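The proposal above could be sketched like this, assuming the skipped epochs are the early even-numbered ones (the function name and `MAX_KEPT` are hypothetical illustrations, not the tool's real API):

```python
MAX_KEPT = 20  # hypothetical name for the save limit

def kept_epochs_minimal_skip(total_epochs, max_kept=MAX_KEPT):
    """Skip exactly one early even-numbered epoch per epoch over the
    limit (the 2nd, 4th, ...), keeping everything else."""
    overflow = max(0, total_epochs - max_kept)
    skipped = {2 * i for i in range(1, overflow + 1)}  # epochs 2, 4, ...
    return [e for e in range(1, total_epochs + 1) if e not in skipped]

print(len(kept_epochs_minimal_skip(22)))  # skips epochs 2 and 4, keeps 20
```

With 22 epochs this drops only the 2nd and 4th, landing exactly on the 20-epoch limit.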
Or just delete the earliest epochs as the count goes over twenty. If you set it to 25 epochs, delete the 1st epoch once epoch 21 starts training, and so on.
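This alternative is a sliding window over the most recent epochs. A sketch under the same hypothetical `MAX_KEPT` assumption:

```python
MAX_KEPT = 20  # hypothetical name for the save limit

def kept_epochs_sliding(total_epochs, max_kept=MAX_KEPT):
    """Keep only the most recent max_kept epochs, discarding the
    oldest ones as training passes the limit."""
    start = max(1, total_epochs - max_kept + 1)
    return list(range(start, total_epochs + 1))

print(kept_epochs_sliding(25)[0])  # with 25 epochs, epoch 6 is the oldest kept
```

With 25 epochs, epochs 1 through 5 get deleted one at a time as epochs 21 through 25 start, leaving epochs 6 to 25.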
Maybe even make both of these a toggle. Either way, the current system just feels bad.
Awaiting Dev Review
💡 Feature Request
8 months ago

Michindus