Question

Kathryn McNeal asked · Ben Wilson commented

Creating many entities at time 0

I have a simulation model that builds widgets, which are composed of many sub-assemblies. We have varying numbers of each sub-assembly in stock at the beginning of the year, when the model starts, and more of these sub-assemblies are built as the model runs. With approximately 27,000 sub-assemblies at the start of the model, spread over 100 different sub-assembly types, the model takes about 30 seconds before it will even start (solely due to this initial-quantity source), because of how much is going on at time 0. What is a more efficient way to handle this? The entities are all created in a single source and are assigned labels with the item name and part number. The entities are sent to queues, where they wait until a combiner pulls them in.

Tags: source schedule, slow run time


Fernanda Becker commented ·

Hi Kathryn,

Could you maybe send your model here? I'm simulating a warehouse and I also need to create many entities at time 0 (200 different item types, totaling 5,600 units), but I don't know how to create them efficiently, even with the arrival-schedule approach. I think your strategy could help me.

Thanks a lot!

Kathryn McNeal replied to Fernanda Becker ·

I can't share my full model, but I've attached the one I used to test the approach. I still need to make sure everything is working as it should, but right now it seems to be cutting my experimenter time in half. Source2 creates 10 entities at the start, and Source8 creates an entity every 10 minutes. If there are > 5 items in Queue3, the value in the GTStock global table is incremented; otherwise an item is created in Queue3 (via the process flow). Every time an entity leaves Queue3, if there are now < 5 items in the queue and > 0 items available in the GTStock global table, an entity is created in Queue3 (via the process flow) and the value in the global table is decremented.
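In code terms, the replenishment side of that logic looks roughly like the snippet below. This is only a simplified sketch, not the exact code from the attached model (the real model creates the item through the process flow, and I'm assuming a single stock value in cell [1][1] of GTStock here):

```
// On Exit of Queue3 - simplified sketch of the logic described above
Object current = ownerobject(c);   // Queue3
Object item = param(1);            // the entity that just left

Table stock = Table("GTStock");

// Fewer than 5 physical items left and stock recorded in the table:
// release one unit back into the model (in the real model this is done
// by a Process Flow Create Object activity) and decrement the table.
if (current.subnodes.length < 5 && stock[1][1] > 0) {
    stock[1][1] = stock[1][1] - 1;
}
```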

As for implementing it with many different part types: each item has a label called PartNum. All of the items go through a prestock queue, and there are separate queues in the warehouse for each part type. When new parts are created via the process flow, they all go to a single queue and are then routed according to their part number. The following screenshots should give you an idea of where to start.

This is part of the flow code for the prestock queue. Port 98 leads to a sink, used when a part is absorbed into the global table stock instead of being kept as a physical item.
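Roughly, the Send To Port logic does something like this (a sketch, not the literal code from the screenshot; I'm assuming here that the PartNum label doubles as the GTStock row and that the output ports to the storage queues are numbered to match the part numbers):

```
/**Send To Port of the prestock queue - simplified sketch*/
Object current = ownerobject(c);
Object item = param(1);

Table stock = Table("GTStock");
int row = item.PartNum;                   // assumption: PartNum is also the GTStock row

Object storage = outobject(current, row); // assumption: port number matches PartNum

if (storage.subnodes.length > 5) {
    // Storage already holds enough physical items for this part type:
    // count this one in the table instead and send it to the sink on port 98.
    stock[row][2] = stock[row][2] + 1;
    return 98;
}
return row;                               // otherwise route to that part's storage queue
```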

I assigned all of the storage queues to a group and used the group's On Exit event to trigger this source; otherwise it's mostly the same as in the attached model.

On model reset, you need to re-initialize your stock levels (in my model, columns 2 and 3 of GTStock start out the same, but only column 2 is ever changed). This is done in the reset trigger.
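In the reset trigger that's just a loop copying the untouched column back over the working one, something like:

```
// OnReset - restore working stock (column 2) from the initial values (column 3)
Table stock = Table("GTStock");
for (int r = 1; r <= stock.numRows; r++) {
    stock[r][2] = stock[r][3];
}
```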

bliyb.png (16.0 KiB)
uixms.png (14.4 KiB)
lk0ur.png (3.7 KiB)

1 Answer

Matt Long answered · Ben Wilson commented

If you're creating 27,000 flowitems and then setting labels on each one, there isn't a whole lot that can be done to improve performance except to buy a faster computer. You may want to think about changing your model so you don't have to create so many flowitems at the same time, or ever.

I'm not sure what kind of system you're building, but one example would be a warehouse picking model. When you're dealing with parts picking, especially small parts, a warehouse may have thousands or hundreds of thousands of different parts. Creating a flowitem for each of these is unreasonable. Instead, all of the data for all of the parts would be stored in a global table and then flowitems are created for each part 'on demand'. This way, you only have a handful of flowitems in the model at any given time.
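As a rough sketch of what "on demand" looks like (the table name, label names, and trigger placement below are placeholders, not anything from your model): when an order actually needs a part, one flowitem gets created and stamped with that part's row from the table, for example in an On Creation trigger:

```
// Sketch only - "Parts" is a placeholder table name, and "PartRow" is an
// assumed label telling us which part this flowitem should represent.
Object item = param(1);              // the one flowitem created on demand
int row = item.PartRow;

Table parts = Table("Parts");
item.PartNum = parts[row][1];        // copy the part's data from the table
item.Description = parts[row][2];
// Every other part never exists as a flowitem; it stays a row in the table
// until something downstream asks for it.
```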

You may be able to make each flowitem represent multiple objects.
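For example (the Quantity label is just an illustration, not a name your model has to use), one flowitem can carry a count and stand in for a whole batch, with the downstream logic working against the label instead of individual items:

```
// Sketch: one flowitem representing a batch; "Quantity" is an example label name.
Object item = param(1);

// When the batch is created:
item.Quantity = 50;                  // this single flowitem stands for 50 units

// Wherever a unit gets consumed:
item.Quantity = item.Quantity - 1;
if (item.Quantity <= 0) {
    destroyobject(item);             // only now does the flowitem go away
}
```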

If you don't need a 3D object for each of the 27,000 sub-assemblies, a token in Process Flow is much smaller and has far less overhead than a flowitem. Creating 27,000 tokens is almost instantaneous.

Hopefully one of these options helps or gives you an idea of how to improve your model's performance.


Kathryn McNeal commented ·

Thanks for the ideas. I'm going to try keeping track of the total inventory of each item in a global table and only keeping as many physical entities in the model as are needed at a time, then using Process Flow to create new entities as needed and decrement the total inventory in the global tables. So far it seems to work well, at least on a small scale.
