Question

Maryam H2 asked · ram edited

Get access to the OptQuest algorithm

How can I access and modify the optimizer's algorithm to test different results or implement alternative algorithms?

I found these two posts, but they are from 2017 and didn't answer my question.

https://answers.flexsim.com/questions/44888/is-there-a-way-to-access-andor-change-the-algorith.html

https://answers.flexsim.com/questions/34782/optquests-inner-workings.html

FlexSim 24.0.0
optquest · algorithm

Joerg Vogel commented:
@Maryam H2, what do you expect to be able to modify, or which algorithm do you want to use?
Maryam H2 replied to Joerg Vogel:

@Joerg Vogel The first step for me is to understand how the current algorithm's logic in the optimizer works, and then to test some changes. For example, if the current one uses a GA to find the optimal solution, I want to change/modify the fitness function, or use a completely different algorithm such as Tabu search or PSO, and compare the results to see how the choice of algorithm and parameters changes the outcome.

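[Editor's note: to make the experiment described above concrete, here is a minimal sketch of driving a black-box objective with a simple genetic algorithm. `run_simulation` is a hypothetical stand-in for however a model replication would be launched and its KPI read back; the toy objective, the GA operators, and all parameter values are illustrative assumptions, not part of FlexSim's API.]

```python
import random

def run_simulation(params):
    # Stand-in objective for illustration only; in practice this would launch
    # a model replication with `params` and return the measured KPI.
    return -sum((p - 3.0) ** 2 for p in params)

def fitness(params):
    # The fitness function is the piece being asked about: here it is just
    # the raw KPI, but penalty terms or weights could be added.
    return run_simulation(params)

def genetic_search(bounds, pop_size=20, generations=50, mut_rate=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)            # maximize fitness
        parents = ranked[: pop_size // 2]                          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [random.choice(genes) for genes in zip(a, b)]  # uniform crossover
            for i, (lo, hi) in enumerate(bounds):
                if random.random() < mut_rate:
                    child[i] = random.uniform(lo, hi)              # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search([(0.0, 10.0), (0.0, 10.0)])
print(best)  # approaches [3.0, 3.0] on the toy objective
```

Swapping in Tabu search or PSO then only means replacing `genetic_search` while keeping the same `fitness` interface, which makes side-by-side comparison straightforward.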

1 Answer

Jason Lightfoot answered · ram edited

The answer about OptQuest is the same as the one Jordan posted in 2017 - nothing has changed in that regard since.

You may additionally want to explore reinforcement/machine learning, for which FlexSim has some support in recent versions. (I've yet to see a convincing example of this being worth the effort over implementing a tailored/custom heuristic.)
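[Editor's note: as mentioned below in the comments, the training algorithm on FlexSim's RL training page is PPO, and the RL tool (22.2 onwards) lets an external trainer drive the model. A minimal sketch of such a training loop using stable-baselines3 follows; the environment here is a standard Gymnasium task used as a placeholder, since the exact wrapper for a given model is model-specific.]

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Placeholder environment: swap in the Gymnasium-style wrapper around your
# simulation model here.
env = gym.make("CartPole-v1")

model = PPO("MlpPolicy", env, verbose=1)  # PPO, per FlexSim's RL training page
model.learn(total_timesteps=100_000)
model.save("sim_policy")
```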


Maryam H2 commented:

@Jason Lightfoot Good idea! I'm also willing to explore RL/ML applications in my model. What is the support in recent versions you are referring to?


Jason Lightfoot replied to Maryam H2:

The link I gave is for 22.2 onwards - much later than the version 17 links you included in the question.

Maryam H2 replied to Jason Lightfoot:

@Jason Lightfoot I see that on the RL training page the training algorithm is PPO (Proximal Policy Optimization), which is an RL algorithm. However, heuristic algorithms such as GA and Tabu search are not well aligned with this type of RL algorithm: heuristics are rule-based algorithms that replace exhaustive search for an optimal solution, while algorithms like PPO optimize a policy by performing gradient ascent on expected returns. Did you mean improving the current PPO, changing PPO to something else, or is there really a way to implement heuristic algorithms and compare the outcomes?
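[Editor's note: to make the contrast drawn above concrete, here is an illustrative Tabu search: a rule-based move set plus a tabu list, with no gradients anywhere, unlike PPO's gradient-based policy updates. `evaluate` is a toy stand-in for a simulation KPI; the tenure, iteration count, and neighborhood are assumptions made for the sketch.]

```python
def evaluate(x):
    # Toy stand-in for a simulation KPI (to be maximized).
    return -sum((xi - 5) ** 2 for xi in x)

def neighbors(x):
    # Rule-based move set: step each integer parameter by +/-1.
    for i in range(len(x)):
        for d in (-1, 1):
            y = list(x)
            y[i] += d
            yield tuple(y)

def tabu_search(x0, iters=100, tenure=7):
    best = current = tuple(x0)
    tabu = []
    for _ in range(iters):
        # Best non-tabu neighbor; the aspiration criterion lets a tabu move
        # through if it beats the global best.
        candidates = [y for y in neighbors(current)
                      if y not in tabu or evaluate(y) > evaluate(best)]
        if not candidates:
            break
        current = max(candidates, key=evaluate)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if evaluate(current) > evaluate(best):
            best = current
    return best

print(tabu_search((0, 0)))  # converges toward (5, 5) on the toy objective
```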


ram commented:

How can I contact you, @Jason Lightfoot?
