question

shanice.c asked · Ryan Clark commented

How can I make sure the error between different replication results is small enough?

I've gone through the manual: the Experimenter is a tool that lets you run the same simulation model multiple times, changing one or more parameters each time to see their effect on the performance measures.

But what if I don't need to change any variables (only one scenario)? I simply want to run the model several times (e.g., 10 replications) and then measure the error between those 10 replications, to make sure it is small enough that the simulation results reliably reflect the real-world system. I'm asking because, generally (or academically), we can't draw conclusions from the result of a single replication, right? So I would like to know what tools or modules in FlexSim I could use to do what I've described. Thank you.
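In statistical terms, the "error" across replications is usually expressed as the half-width of a confidence interval on the performance measure. A minimal sketch of that calculation (plain Python, outside FlexSim; the throughput numbers are hypothetical placeholders):

```python
# Given per-replication outputs, compute the mean, sample standard
# deviation, and 95% confidence-interval half-width (the "error").
import statistics
from scipy.stats import t

throughputs = [412, 398, 405, 421, 388, 402, 417, 395, 408, 400]  # 10 replications

n = len(throughputs)
mean = statistics.mean(throughputs)
stdev = statistics.stdev(throughputs)                 # sample std dev (n - 1)
half_width = t.ppf(0.975, n - 1) * stdev / n ** 0.5   # 95% CI half-width

print(f"mean = {mean:.1f}, 95% CI = +/- {half_width:.1f}")
print(f"relative error = {half_width / mean:.1%}")    # e.g. require < 5%
```

If the relative error is below your target (5% is a common rule of thumb), the replication count is adequate; otherwise, add replications.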

FlexSim 21.2.0
flexsim experimenter

Ryan Clark commented:

Hi @Fiona C, was one of Joerg Vogel's or David Seo's answers helpful? If so, please click the "Accept" button at the bottom of the one that best answers your question. Or if you still have questions, add a comment and we'll continue the conversation.

If we haven't heard back from you within 3 business days we'll auto-accept an answer, but you can always unaccept and comment back to reopen your question.

David Seo answered:

@Fiona C

What exactly do you mean by 'error' when running FlexSim (or a simulation in general)?

For example, do you mean that the result of each simulation run differs from the others, or a minor error during the run, such as a warning message? The model should not produce any error messages in its logic.

Does your 'result error' mean the distribution of the result values after running?

If your current model result is exactly the same on every run, uncheck 'Repeat Random Streams'. The option is located under Statistics > Repeat Random Streams.

2021-10-04-094943.jpg

With this option unchecked, the result values will vary from replication to replication.
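To illustrate the idea (in plain Python, not FlexSim's internals): repeating the random streams is like reusing the same seed for every replication, so each replication samples exactly the same values; a fresh stream per replication makes the results vary. The distribution and numbers below are hypothetical.

```python
# Same seed -> identical sampled processing times -> identical replications.
import random

def replicate(seed):
    rng = random.Random(seed)
    # hypothetical: sum of 5 exponential processing times with mean 10
    return sum(rng.expovariate(1 / 10) for _ in range(5))

print([replicate(seed=42) for _ in range(3)])  # "repeat streams": all identical
print([replicate(seed=s) for s in range(3)])   # fresh stream each run: varies
```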




shanice.c commented:

@David Seo If all processing times and flow item arrival times are constant, would the 10 replications show no difference even with "Repeat Random Streams" unchecked?

David Seo replied:
@Fiona C Yes. If you set all processing times and arrival times to be constant, the results of the repeated replications are the same: no difference. If the input values are constant, the results are constant, with no deviation.
shanice.c replied:
@David Seo Thanks for the reply. I'll keep this in mind.
Joerg Vogel answered:
Your thoughts are correct, and you can do it exactly as you describe: run your model several times in the Experimenter.
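The usual follow-on step is to keep adding replications until the confidence interval is tight enough. A hedged sketch of that sequential procedure (plain Python, outside FlexSim; run_replication() is a hypothetical stand-in for reading one replication's performance measure):

```python
# Run replications until the 95% CI half-width falls below a 5% relative
# error target, with a cap on the total number of replications.
import random
import statistics
from scipy.stats import t

def run_replication(seed):
    # placeholder for one Experimenter replication's output
    return 400 + random.Random(seed).gauss(0, 10)

results = [run_replication(s) for s in range(3)]       # small pilot set
while True:
    n, mean = len(results), statistics.mean(results)
    hw = t.ppf(0.975, n - 1) * statistics.stdev(results) / n ** 0.5
    if hw / mean < 0.05 or n >= 50:                    # stop at target or cap
        break
    results.append(run_replication(len(results)))

print(f"n = {n}, mean = {mean:.1f} +/- {hw:.1f}")
```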

shanice.c commented:

Hi @Joerg Vogel, I think what I want is a KPI summary like the picture below (with average, standard deviation, min, and max).

1633332095412.png

To get this, is it necessary to use a Statistics Collector first, so that I can use a Calculated Table to compute the KPIs I want, and then use Performance Measures to finally obtain the so-called "data summary"?

I understand these tools may give users the flexibility to collect whatever they want, but I don't really know where to start, and after reading the manual I'm still confused about how to use them. Is there a before-and-after relationship between these tools? Actually, I would like to collect very simple KPIs, such as throughput, queue waiting time, processor utilization, and AGV utilization. Could you give me some more examples of how to use these tools?
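For context on what that "data summary" computes, here is a minimal sketch in plain Python (not the FlexSim tools themselves; the KPI names and values are hypothetical placeholders) of the per-KPI statistics taken across replications:

```python
# Per-KPI summary across replications: average, standard deviation, min, max.
import statistics

kpis = {
    "Throughput":            [412, 398, 405, 421, 388],
    "Processor Utilization": [0.81, 0.79, 0.80, 0.83, 0.78],
}

for name, values in kpis.items():
    print(name,
          f"avg={statistics.mean(values):.2f}",
          f"sd={statistics.stdev(values):.2f}",
          f"min={min(values):.2f}",
          f"max={max(values):.2f}")
```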

Joerg Vogel replied:
What you can achieve depends on the arrival pattern of the input and the process durations of your model. If you create all items at model start, efficiency is poor. If you create items only when your processes have availability, efficiency rises. If you want to compare different option settings, you need to control parameters in different scenarios in the Experimenter.

Maybe you want to evaluate your model only after a warmup period has finished and all involved stations have something to do.
Perhaps you can work through the tutorials first to gather data on a small, simple model of your own. Then, if you have problems with your model, come back here, tell us about them, and share your test model with us.
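As a side note on the warmup idea, a tiny sketch (plain Python; the (time, value) samples are hypothetical) of discarding observations recorded before the warmup time so the summary reflects steady-state behavior:

```python
# Drop samples recorded before the warmup time, then summarize the rest.
samples = [(t, 300 + t if t < 50 else 400) for t in range(0, 200, 10)]

warmup_time = 50
steady = [value for time, value in samples if time >= warmup_time]
print(f"kept {len(steady)} of {len(samples)} samples; "
      f"mean = {sum(steady) / len(steady):.1f}")
```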

shanice.c replied:
@Joerg Vogel Thanks for the explanation. I'll go through the tutorials.