Question

Steven Chen asked · Phil BoBo edited

Reinforcement Learning and Parameter Table Bounds

Hello,

The observation variables in the parameter table are not really bounded in the case I'm working on.

It's fine to set a large value as the bound, but I always wonder why the parameter table must be strict about bounds. If unbounded variables have to be saved in a global table instead, doesn't that defeat the purpose of exposing variables, since the bounded and unbounded variables were supposed to be in the same table?

I think it would be better to make the lower bound and upper bound optional. What do you think?


By the way, it would be nice to have the following features in the future. Is this the correct place to propose feature requests?

Add "copy parameter" button to parameter table.

Listen to a Group instead of an individual object in the Decision Events of the Reinforcement Learning tool.

FlexSim 22.1.0
Tags: reinforcement learning, parameters table

1 Answer

Phil BoBo answered · Phil BoBo edited

You should normalize the observation space parameters when training RL algorithms. See Reinforcement Learning Tips and Tricks — Stable Baselines documentation (stable-baselines.readthedocs.io)

Machine learning algorithms will learn better and faster if your observation space variables are normalized to [0, 1] or [-1, 1]. Using unbounded variables, or variables with large, unknown ranges, will keep the training from working well.
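
As an illustration only (not FlexSim's actual integration code), here is a minimal sketch of that normalization using a Gym-style ObservationWrapper. It assumes your environment exposes a Box observation space whose low/high values mirror the bounds you set in the Parameters table; MyFlexSimEnv is a hypothetical placeholder for your own environment class.

```python
import gym
import numpy as np

class NormalizeObservation(gym.ObservationWrapper):
    """Rescale each observation variable from [low, high] to [0, 1]."""

    def __init__(self, env):
        super().__init__(env)
        # Assumes the wrapped environment declares finite bounds on a Box space,
        # e.g. the same lower/upper bounds entered in the Parameters table.
        self.low = env.observation_space.low
        self.high = env.observation_space.high
        self.observation_space = gym.spaces.Box(
            low=0.0, high=1.0, shape=env.observation_space.shape, dtype=np.float32
        )

    def observation(self, obs):
        # Min-max normalization; clip in case a raw value drifts past its bound.
        scaled = (obs - self.low) / (self.high - self.low)
        return np.clip(scaled, 0.0, 1.0).astype(np.float32)

# Hypothetical usage: wrap the environment before handing it to the RL library.
# env = NormalizeObservation(MyFlexSimEnv())
```

If the bounds genuinely can't be known up front, Stable Baselines' VecNormalize wrapper, which tracks running observation statistics during training, is another option.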

Machine learning isn't magic; it's math. Parameter tables are strict with bounds set on the variables so that optimization algorithms can work well.

You can copy/paste with Ctrl+C/Ctrl+V in a Parameters table to copy parameters.

I'll add a case to the dev list with the suggestion for listening to Groups.
