1/ A few days ago I posted an article in which I showed a few common misconceptions about Avalanche.
(Open Token, obviously, failed to find the safety failure, and only verified that the implementation follows the paper).
The particular command line that runs the safety-breaking adversarial simulation is:
python run.py --adversary_percent=0.17 --adversary_strategy BREAK_SAFETY experiment
If the count is reset whenever a sample doesn't return 80% for any color, then naturally (13) above doesn't work, and safety is not violated with this particular approach.
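To make the distinction concrete, here is a minimal Python sketch of the two counter policies. This is not the actual run.py: the names (decides, red_fractions) and the thresholds ALPHA and BETA are illustrative placeholders, with ALPHA = 0.8 standing in for the 80% majority mentioned above.

    ALPHA = 0.8   # per-sample majority threshold (the 80% above)
    BETA = 15     # consecutive conclusive samples needed to decide (illustrative)

    def decides(red_fractions, reset_on_failure):
        """Return True if the node finalizes a color given this sample stream.

        reset_on_failure=True is the variant discussed above: an inconclusive
        sample wipes the accumulated count, so (13) no longer applies.
        """
        count, last = 0, None
        for frac in red_fractions:
            if frac >= ALPHA:
                color = "RED"
            elif 1 - frac >= ALPHA:
                color = "BLUE"
            else:
                # No color reached 80% in this sample.
                if reset_on_failure:
                    count = 0   # the reset variant that blocks the attack
                continue        # counter-kept variant: count carries over
            count = count + 1 if color == last else 1
            last = color
            if count >= BETA:
                return True
        return False

    # A stream that loosely mirrors what an adversary relies on: runs of
    # conclusive samples interleaved with inconclusive ones.
    stream = ([0.85] * 5 + [0.5]) * 10
    print(decides(stream, reset_on_failure=False))  # True: count survives failed samples
    print(decides(stream, reset_on_failure=True))   # False: count never reaches BETA

The point of the sketch is only the one branch: whether an inconclusive sample leaves the count intact or zeroes it out determines whether confidence can be accumulated across adversarially induced failed samples.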