I ran the experimenter for 10 replications of a scenario and then looked at the reported performance data. One of the ten replications reported a value for a performance measure that was well outside the expected range. I decided to rerun the experimenter to see whether the behavior showed up again when running 10 more replications. It did show up again, but in the exact same replication. Repeating this one more time gave the same result again.

When we "Reset Experiment" and then re-run, is the experimenter supposed to produce exactly the same replication results as last time if we haven't changed anything? Is there a way to change this so it doesn't repeat and instead generates new data?
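For context, here is what I suspect is happening (a hedged sketch of the general idea, not the experimenter's actual implementation): if each replication is started from a fixed, replication-specific random seed, the same random draws repeat on every re-run, so the outlier would always land in the same replication. A minimal Python illustration of that idea, where `run_replication`, the seed values, and the exponential draws are all my own hypothetical stand-ins:

```python
# Sketch of fixed-seed vs. fresh-seed replications (hypothetical example,
# not the experimenter's code).
import random

def run_replication(seed, n_samples=1000):
    """Hypothetical replication: average of exponential 'service times'."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) for _ in range(n_samples)) / n_samples

# Fixed per-replication seeds -> identical results on every re-run,
# so an outlier always appears in the same replication.
fixed_seeds = [100 + i for i in range(10)]
print([round(run_replication(s), 4) for s in fixed_seeds])

# Fresh seeds drawn each time -> new, independent replication results.
fresh_seeds = [random.randrange(1_000_000) for _ in range(10)]
print([round(run_replication(s), 4) for s in fresh_seeds])
```

If that is roughly how the experimenter works, I am essentially asking whether there is a setting to switch from the "fixed seeds" behavior to the "fresh seeds" behavior.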