mark zhen asked · Kavika F commented

Rewriting the Reward Function

@Kavika F @Jason Lightfoot @Felix Möhlmann

I have now split my reward function into three parts.

I think it should be similar to the example reward function, but I don't know how to rewrite it. (Attached: rltest-2 (1).fsm)

[screenshot: 1669726637469.png]

Also, I'm now stuck in training and want to ask how to solve it.

I think my reward function should take the form of a matrix, so how do I rewrite it?

[screenshot: 1669796990447.png]

I am currently writing it like this, but there are still many problems and I can't find a solution.

[screenshot: 1669800190538.png]
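From what I understand, the reward the agent gets each step is normally a single number, so the three parts would be combined into one value, for example as a weighted sum. A rough sketch of that idea (the component names and weights below are placeholders, not taken from my model):

```python
# Sketch only: combining three reward components into one scalar per step.
# Names and weights are placeholders for illustration.

def combined_reward(throughput_delta, wip_penalty, lateness_penalty,
                    w_throughput=1.0, w_wip=0.1, w_late=0.5):
    """Weighted sum of three reward terms; the agent only sees the total."""
    return (w_throughput * throughput_delta
            - w_wip * wip_penalty
            - w_late * lateness_penalty)

# Example: one item finished, 3 items of WIP, nothing late -> 0.7
print(combined_reward(throughput_delta=1, wip_penalty=3, lateness_penalty=0))
```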

I can't get training.py to execute smoothly.
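On the training side, training.py is essentially the standard stable-baselines3 loop. A minimal sketch of that loop (MyFlexSimEnv below is only a dummy stand-in, not the real connection to FlexSim, and it assumes the older gym-style API):

```python
# Simplified sketch of a training script. MyFlexSimEnv is a placeholder that
# only mimics the shape of the API; the real environment would talk to FlexSim.
import gym
import numpy as np
from stable_baselines3 import PPO


class MyFlexSimEnv(gym.Env):
    """Dummy stand-in environment with a small observation and action space."""

    def __init__(self):
        super().__init__()
        self.observation_space = gym.spaces.Box(low=0.0, high=100.0,
                                                shape=(3,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(3)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return np.zeros(3, dtype=np.float32)

    def step(self, action):
        self.steps += 1
        obs = np.random.uniform(0.0, 100.0, size=3).astype(np.float32)
        reward = 0.0  # the real reward would come back from the model
        done = self.steps >= 50
        return obs, reward, done, {}


env = MyFlexSimEnv()
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
model.save("ppo_test")
```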

FlexSim 22.0.0
reinforcement learning · reward function
1669726637469.png (153.4 KiB)
1669796990447.png (39.5 KiB)
1669800190538.png (119.8 KiB)
rltest-2-1.fsm (281.8 KiB)


mark zhen commented:

@Kavika F

I want to discuss maximizing throughput. My reward function has been rewritten. I also want to ask why my model is connected to the agent the way it is; there is something about the connections that I can't understand. (Attached: 123.fsm)
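For the throughput part, one simple shape for the reward would be the increase in finished output since the previous decision step. A sketch of that (current_output is a placeholder for whatever count of finished items the model reports):

```python
# Sketch: reward the increase in finished output since the previous decision.
# current_output is a placeholder for the model's running output count.

class ThroughputReward:
    def __init__(self):
        self.last_output = 0

    def __call__(self, current_output):
        """Return how many items finished since the last call."""
        reward = current_output - self.last_output
        self.last_output = current_output
        return reward

reward_fn = ThroughputReward()
print(reward_fn(5))   # 5 items finished so far -> reward 5
print(reward_fn(8))   # 3 more finished since the last decision -> reward 3
```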

123.fsm (289.3 KiB)
mark zhen commented:

This is what my connection to the agent looks like:
[screenshot: 1670312937670.png]

But when I disconnect the agent, my model looks like this:

[screenshot: 1670313012422.png]

1670312937670.png (288.0 KiB)
1670313012422.png (263.5 KiB)
Kavika F ♦ commented:

Hi @mark zhen,

Were you able to solve your problem? If so, please add and accept an answer to let others know the solution.

If we don't hear back in the next 3 business days, we'll assume you were able to solve your problem and we'll close this case in our tracker. You can always comment back at any time to reopen your question, or you can contact your local FlexSim distributor for phone or email help.


0 Answers