Speed issues in a large, complex model

Ben Wilson asked · Jason Lightfoot commented

We are modeling a high-throughput widget factory. We would like to simulate one week of production, and have the simulation complete in a few hours or faster, but we've found that our large quantities of widgets significantly reduce the speed of our FlexSim model.

We have already gone through several rounds of trying to optimize the model, including incorporating all the suggestions found on the old and new FlexSim forums. For our latest attempt we converted all our logic to Process Flow so we can completely decouple it from the 3D (perspective) view. We can essentially turn off all the flow items and have the model run only in Process Flow. Unfortunately, the model is still very slow. We fear this may be a model that FlexSim won't be able to handle.

We are considering reducing the number of items by a factor, so that one token represents 10 or more widgets. This may help, but it could impact the model's accuracy.

So before we start scaling down the model, is there anything else we can consider that would significantly increase its processing speed? We submitted our model separately in a private question. The Process Flow is what runs the whole thing, and the global variable ShowModelVisuals toggles flow item generation. Keep in mind that the model is eventually supposed to contain an additional production line that is a copy of the current one, plus a packing area after both lines. With those additional areas we are potentially facing a doubling of the item count that is already killing our model.

Furthermore, it seems that FlexSim is not utilizing the computer's processing power very well. We are aware that the software can only use one thread, but it also seems that it's not fully using that thread either. CPU utilization never exceeds 18%.

FlexSim 16.2.0
Tags: process flow, model speed, model optimization, large model, massive models


1 Answer

Phil BoBo answered · Jason Lightfoot commented

The problem is not necessarily in the number of items you have in your simulation. The problem is in the size of the event list at any given time.

For example, when this model gets to steady-state, you have roughly 42,000 events on the event list.

Testing This In Your Model

First, I opened the Statistics view of your Process Flow's Sink activity so that I could see how many widgets get completely processed.

Then, as the model runs, you can see this value slowly increase.

Now, you can compare how many widgets per real second are being processed, rather than comparing how many simulation minutes are being processed per real second.

For example, on my computer with your model, the Sink's input after 30 real seconds is 2,848. So 2,848 widgets got processed in 30 real seconds.

Now adjust your inter-arrival time in your ModelParameters table from 0.3 to 300. Run the model at max speed and see how many widgets get processed in 30 seconds. On my computer, it processed 127,739 widgets in 30 seconds.

Unless I'm misunderstanding how your model works, the only thing that changes between these two runs is the number of widgets in the system simultaneously, and thus how many events are queued on the event list at a given time. With the inter-arrival time at 300, you have roughly 45 events on the event list at steady-state.

With fewer events on the event list at once, you can process about 250,000 widgets per real minute instead of about 6,000 widgets per real minute.

The Answer

So the answer to your question is that you need to adjust how your model is configured so that you have fewer pending events on the event list at any given time.

Here's a general example of how you could think about solving this problem:

Imagine you have a conveyor with 50,000 items on it at a time, with 1 item coming onto the conveyor and 1 item leaving the conveyor every 1 second.

If you model each item individually entering the conveyor and creating an event to leave the conveyor after the amount of time that it takes to convey it across the conveyor, then you will have 50,000 events pending for those items.

If instead you kept a label that tracks how many items are on the conveyor, created one event for the first item to leave, and then, inside that event, created the next event for the next item leaving one second later, and so on, then you would only ever have one event on the event list, representing when the next item will leave that conveyor.

(This example is actually quite similar to how the conveyor objects in the Conveyor module work; they minimize the number of events that are pending on the event list. You can see that in the attached model (giant-conveyor.fsm), which shows a conveyor with 13,822 items on it at once, but only 2 pending events on the event list at a time. The number of flow items going through the system is about 1,800,000 per real minute on my computer. FlexSim can handle simulating this system; you just need to adjust how you think about the problem so that you can model it efficiently.)
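If it helps to see that bookkeeping written out, here is a minimal sketch in C++ (illustrative only; it is not FlexScript and not taken from the attached model) of the two modeling styles for the hypothetical conveyor above. The event list is just a priority queue; the point is how many entries each style keeps pending at once.

// Minimal C++ sketch (illustrative only; not FlexScript and not the attached
// model) comparing the two modeling styles above by how many entries each one
// keeps pending on the event list at once.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;
    std::function<void()> action;
};
struct LaterFirst {
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};
using EventList = std::priority_queue<Event, std::vector<Event>, LaterFirst>;

int main() {
    const int itemsOnConveyor = 50000;
    const double transitTime = 1.0;  // one departure per second

    // Style 1: one pending "item leaves" event per item, so 50,000 pending events.
    EventList perItem;
    for (int i = 0; i < itemsOnConveyor; ++i)
        perItem.push({(i + 1) * transitTime, [] { /* one item leaves */ }});
    std::printf("per-item style, pending events: %zu\n", perItem.size());

    // Style 2: keep only a count, and let each departure event schedule the next
    // one, so there is a single pending event however many items are conveying.
    EventList chained;
    int remaining = itemsOnConveyor;
    std::function<void(double)> scheduleNextDeparture = [&](double when) {
        chained.push({when, [&, when] {
            --remaining;                      // update the count "label"
            if (remaining > 0)
                scheduleNextDeparture(when + transitTime);
        }});
    };
    scheduleNextDeparture(transitTime);
    std::printf("chained style, pending events: %zu\n", chained.size());
    return 0;
}

The per-item style pins 50,000 entries on the event list; the chained style pins exactly one, no matter how many items the conveyor is holding.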

How To Fix This

So rather than having a token that represents each widget, each with its own pending event in your simulation, you should think about what problems you are actually trying to solve by running this simulation. Depending on what questions you are trying to answer, you should restructure your simulation's logic so that it is based on the actual process in the system, not necessarily on processing each widget as a unique entity that has a pending event on the event list at all times. This is especially true where you have constant values and rates rather than stochastic values.
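As one way to picture that last point, here is a rough C++ sketch (hypothetical names, not FlexSim API and not your model's logic) of a constant-rate station tracked with counters and a single "backlog empty" time, instead of one token and one pending event per widget.

// C++ sketch (hypothetical names, not FlexSim API or your model's logic):
// a constant-rate station tracked with counters instead of one token and one
// pending event per widget.
#include <algorithm>
#include <cstdio>

struct ConstantRateStation {
    double cycleTime;       // constant seconds per widget
    long backlog = 0;       // widgets waiting or in process
    long completed = 0;     // widgets finished so far
    double lastUpdate = 0;  // model time when the counters were last advanced

    // Advance the counters to model time 'now'; no per-widget events required.
    void update(double now) {
        long done = std::min(static_cast<long>((now - lastUpdate) / cycleTime), backlog);
        backlog -= done;
        completed += done;
        lastUpdate += done * cycleTime;
        if (backlog == 0)
            lastUpdate = now;  // idle, so the clock restarts at the next arrival
    }

    // Arrivals just bump a counter; even 50,000 widgets at once add zero events.
    void addWidgets(long count, double now) {
        update(now);
        backlog += count;
    }

    // The only event worth scheduling: the time the current backlog empties.
    double timeBacklogEmpty(double now) {
        update(now);
        return lastUpdate + backlog * cycleTime;
    }
};

int main() {
    ConstantRateStation station{0.5};   // 0.5 seconds per widget (illustrative)
    station.addWidgets(50000, 0.0);
    station.update(600.0);
    std::printf("completed by t=600 s: %ld\n", station.completed);               // 1200
    std::printf("backlog empty at t=%.0f s\n", station.timeBacklogEmpty(600.0)); // 25000
}

Whether that level of aggregation is acceptable depends on the questions you need the simulation to answer.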

CPU Utilization

While FlexSim is running the model, the Windows Task Manager will show its usage at approximately one core of your processor. If you have a 1-core processor, it will show about 99%. If you have more cores, and if Hyper-Threading is enabled, it will show a lower percentage. For example, on a 4-core processor with Hyper-Threading (8 logical processors), one fully busy thread appears as only about 12.5% total CPU utilization.


giant-conveyor.fsm (16.9 KiB)

Brandon Peterson commented:

To view the event list go to the main menu option "Debug -> Event List"

Phil BoBo replied to Brandon Peterson:

If the event list view is open, the view itself will be sorting the event list each time it redraws, which will be slow.

You can use this code to see how many events are pending:

// returns the number of events currently pending on the model's event list
content(node("MAIN:/1/1/events"))
Phil BoBo commented:

Attached is a sample model that implements a different kind of delay, emulating the way the conveyors work.

conveyordelaytest-runsubflow-1.fsm

Rather than using a Create Tokens activity, it uses a Run Sub Flow activity without a Finish activity. That way, it doesn't release the tokens automatically, but rather waits for you to release them explicitly with releasetoken().

Then releasetoken() can release the token out connector 1, rather than to an activity reference stored on a label, so that you can connect and use the Low Event Delay activity just like any other activity, without breaks in your process flow.

I set up the DelayFlow activities so that the first token doesn't get reused. If you reuse it, the parent token keeps its link to that token, and it doesn't get destroyed properly when you send it to a sink (Run Sub Flow doesn't create independent tokens). Instead, the flow creates a new token that continues looping through and kills the token that is tied to the parent token that got released. This messed up the "current" reference, though, so I stored it on a label at the beginning and access it through that label instead of current.

I hope that makes sense.

This model also demonstrates using a Model Documentation widget on a Dashboard to show the number of pending events on the event list using the code I wrote in my previous comment above.
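For anyone reading along without opening the model, here is a minimal C++ sketch (illustrative only; it is not FlexScript and not the attached sub flow's actual logic) of the pattern being emulated: a FIFO delay that can hold any number of items while keeping at most one pending event, namely the exit of the item at the head of the queue.

// Minimal event-driven sketch: any number of items can sit in the FIFO delay,
// but only the head item's exit is ever on the event list.
#include <cstdio>
#include <deque>
#include <functional>
#include <map>

struct Simulator {
    double now = 0;
    std::multimap<double, std::function<void()>> events;  // the event list

    void schedule(double time, std::function<void()> action) {
        events.emplace(time, std::move(action));
    }
    void run() {
        while (!events.empty()) {
            auto it = events.begin();
            now = it->first;
            auto action = std::move(it->second);
            events.erase(it);
            action();
        }
    }
};

struct FifoDelay {
    Simulator& sim;
    double delay;                  // constant dwell time
    std::deque<double> exitTimes;  // one entry per item, but no event per item

    void enter() {
        exitTimes.push_back(sim.now + delay);
        if (exitTimes.size() == 1)  // only the head item owns a scheduled event
            scheduleHeadExit();
    }
    void scheduleHeadExit() {
        sim.schedule(exitTimes.front(), [this] {
            exitTimes.pop_front();
            if (!exitTimes.empty())
                scheduleHeadExit();  // chain to the next item, one event at a time
            std::printf("item exits at t=%.0f, pending events: %zu\n",
                        sim.now, sim.events.size());
        });
    }
};

int main() {
    Simulator sim;
    FifoDelay conveyor{sim, 10.0};
    for (int i = 0; i < 5; ++i)  // five items enter, one second apart
        sim.schedule(i * 1.0, [&] { conveyor.enter(); });
    sim.run();
}

The attached sub flow gets the same effect inside Process Flow with the Run Sub Flow activity and releasetoken(), as described above.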

martin.j commented:

I think this is something that we will have to keep in mind in the future when designing all our models. It's not so much the number of items in the model that is a problem, but the number of events that they generate. If we keep this in mind we can perhaps find efficient ways to transport objects and information through the model with as few events as possible.

If I had such skills, it would probably be cool to create this "FIFO Delay Activity" as a stand-alone activity. It might be a good first foray into making my own FlexSim objects in C++. For now, the subflow from this thread will have to do.

Phil BoBo replied to martin.j:

We've added a case to the dev list to look into optimizing the ProcessFlow activities to minimize the number of events that they create. That way, the regular delay activity could handle this optimization, and the original way you designed the process flow would be much faster.

We've done that with the regular FlexSim objects and the Conveyor and AGV modules, but not the ProcessFlow activities yet.

Phil BoBo commented:

After discussing this internally, we ran some more tests. The slowdown is because of the number of events, but not because of finding the next event to execute. Finding the next event to execute is actually a really fast process, even with a large event list. The actual slowdown happens when tokens get destroyed: the engine traverses the event list to destroy any events that involve that token. We have updated our dev case with this new information so that we can add performance optimizations to make this much faster in a future version. For now, the best method is indeed to limit the size of the event list (and that's always a good practice), but in a coming release, we will make it so that this modeling situation will be a lot faster despite the large event list.
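To give a feel for why that traversal matters at this scale, here is a simplified C++ illustration (a stand-in for the behavior just described, not the actual implementation): if destroying each token scans the whole event list for events referencing it, the cleanup cost grows with the number of tokens multiplied by the size of the list.

// Simplified C++ illustration (a stand-in for the behavior described above,
// not FlexSim's actual implementation): if destroying each token means
// scanning the whole event list for events that reference it, total cleanup
// cost grows with the number of tokens times the size of the event list.
#include <cstdio>
#include <iterator>
#include <list>

struct Event { int tokenId; };

int main() {
    const int tokenCount = 42000;  // roughly the steady-state size from the answer
    std::list<Event> eventList;
    for (int id = 0; id < tokenCount; ++id)
        eventList.push_back({id});

    long long comparisons = 0;
    for (int id = 0; id < tokenCount; ++id) {  // destroy every token once
        for (auto it = eventList.begin(); it != eventList.end(); ) {
            ++comparisons;
            it = (it->tokenId == id) ? eventList.erase(it) : std::next(it);
        }
    }
    // Roughly 880 million comparisons for 42,000 tokens, which is why a small
    // event list (or an index from token to its events) makes such a difference.
    std::printf("comparisons during cleanup: %lld\n", comparisons);
}

That blow-up is why keeping the event list small helps so much today, even before those optimizations ship.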

Joerg Vogel commented:

@phil.bobo This answer contains so much information that I think it shouldn't be in the help space. It is more of an article, a manual entry, or a best practice. Can you move it to another space?

Pedro_Veiga commented:
Hi @Phil BoBo,

Regarding what you said about CPU utilization, is there any possibility of using more cores of the computer's processor? I also have a complex model that I run for a long period of time, and I'd like it to run faster.

Jason Lightfoot replied to Pedro_Veiga:
You can take advantage of multiple cores when running experiments, with each replication using its own core.

Have you profiled your model for performance to see where it might be improved?

Pedro_Veiga replied to Jason Lightfoot:
Hi Jason, thanks for the pointers. I've been reading this article, which explains how to speed up the simulation of a single-scenario model.