Currently I'm running out of quota because the agent keeps re-running and re-testing everything. I love this idea in the project, and the re-runs do make the code very robust.
However, this is eating up context. Could we have a "tester" agent that builds out test automation scripts to validate each step? Then, instead of running the model again and again, the agent would just trigger the test automation script, something like the sketch below.
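To illustrate what I mean (this is just a rough sketch, I don't know the project's internals, and all file paths, field names, and the use of pytest here are my own assumptions): the tester agent would generate a small deterministic script per step, and the main agent runs that script instead of re-invoking the model, only falling back to a re-run when the script exits non-zero.

```python
#!/usr/bin/env python3
"""Hypothetical per-step validation script that a "tester" agent could generate.

Exit code 0 means the step's output is still valid; non-zero means the main
agent should re-run the model for this step.
"""
import json
import subprocess
import sys
from pathlib import Path

# Assumed artifact path produced by the step being validated.
STEP_OUTPUT = Path("artifacts/step_03_output.json")


def main() -> int:
    # Check 1: the step's artifact exists and is well-formed JSON.
    if not STEP_OUTPUT.exists():
        print(f"FAIL: missing artifact {STEP_OUTPUT}")
        return 1
    try:
        data = json.loads(STEP_OUTPUT.read_text())
    except json.JSONDecodeError as exc:
        print(f"FAIL: artifact is not valid JSON: {exc}")
        return 1

    # Check 2: required fields produced by the step are present
    # (field names are placeholders).
    for field in ("status", "result"):
        if field not in data:
            print(f"FAIL: artifact missing field '{field}'")
            return 1

    # Check 3: run the project's existing tests for this step
    # (pytest is an assumption; substitute the project's own test runner).
    proc = subprocess.run(["pytest", "-q", "tests/step_03"], capture_output=True)
    if proc.returncode != 0:
        print("FAIL: step tests failed")
        print(proc.stdout.decode())
        return proc.returncode

    print("PASS: step 03 output validated")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The agent would only spend model calls (and context) on steps where a script like this actually fails, instead of re-verifying every step from scratch.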
Love this project tho!
(Not sure if this is the right place for suggestions; please point me to the correct place if not.)