I am working on a large model that includes over 1,200 SKUs.
The initial inventory for the simulation is provided for each of the SKUs, which are divided into two main categories: hanging clothes and boxed clothes. Boxed items are stored on pallets, each containing 4 to 6 boxes, while hanging items are stored directly on racks.
The hanging storage is planned as 15 racks, 17 bays each, 3 levels, and 280 slots, with 2 items per slot (intended to be 2 hanging tubes). The initial inventory for the hanging products totals 388,481 individual items.
I designed a model that generates the items, looks for a slot in the storage system, and then pushes the item's token into a list for later pulling.
The problem I encountered is that item generation for the hanging products slows down drastically. After the first couple of thousand tokens, the rate at which they are created and stored drops to approximately 50 tokens per second. At that rate, it would take about 2 hours just to begin the simulation.
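Since the model itself can't be shared, here is only a guess at the cause: if each newly generated token scans the storage system for a free slot, the cost of that search grows with the number of items already stored, which would match a gradual slowdown rather than a constant rate. A minimal Python sketch of the alternative, where all free positions are enumerated once up front so each item is placed in constant time (the dimensions and helper names are assumptions for illustration, not taken from the actual model):

```python
from collections import deque

def build_free_slots(racks=15, bays=17, levels=3, tubes_per_slot=2):
    """Enumerate every (rack, bay, level, tube) position once, up front."""
    return deque(
        (r, b, l, t)
        for r in range(racks)
        for b in range(bays)
        for l in range(levels)
        for t in range(tubes_per_slot)
    )

def assign_items(n_items, free_slots):
    """O(1) per item: pop the next free position instead of re-scanning storage."""
    placements = []
    for item_id in range(n_items):
        if not free_slots:
            raise RuntimeError("storage full")
        placements.append((item_id, free_slots.popleft()))
    return placements

free = build_free_slots()
placed = assign_items(1000, free)
print(len(placed))  # → 1000, with no per-item search over occupied slots
```

Inside a simulation tool the same idea usually means keeping a list or queue of free slots (or a per-bay counter) and consuming it as items arrive, instead of querying every slot on each arrival; batch-creating the initial inventory at time 0, rather than pushing it through the token-generation logic one item at a time, also avoids the per-token event overhead.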
Is there a way to optimize this process, or to utilize the full capabilities of the computer?
Also:
This was tested on a capable computer with 32 GB of RAM, a 3.6 GHz processor, and a powerful GPU. Only a third of the available RAM was in use, and neither the CPU nor the GPU was fully utilized.
Unfortunately, I cannot share my model: this is for a competition organized by Factible, and since consulting on the forum is allowed, other competitors might find our question. However, I can share the following screenshot: