Experimenter 2021 and Dashboards

Craig DIckson asked · anthony.johnson edited

I have some existing models that use dashboard elements (e.g. editable text, table), connected to global tables, to provide inputs to the logic. When I wanted to use the experimenter, I could refer to those same global table cells to create a scenario.

In the new experimenter I have to define a parameter, and I am struggling to understand how to connect the parameter to the global table. For a variety of reasons I do not want to replace the global tables with parameter tables. I am hoping that I have missed something and you can tell me how to use parameters with global tables.


Tags: experimenter, FlexSim 21.1.2

Craig DIckson commented:

I built some simple models to see if I could figure it out. This is what I think I learned. Can someone (@anthony.johnson?) confirm?

  1. To use the experimenter, you have to define parameters. There is no other option.
  2. If you make a parameter point at another data structure (e.g., a cell in a global table, a global variable), then that parameter becomes the only way to actually change that value, **even when you are not using the experimenter**. That is, if you directly change the value of a cell linked to a parameter, the cell will revert to the value set in the associated parameter when you reset the model, **without telling you**.
  3. In other words, parameters are always unidirectional, from the parameter to the linked node. This is unlike using "Edit" or "Table" elements on a dashboard, which are bi-directional: I can change the value on the dashboard or I can change it directly on the linked table cell, and the effect is the same.
  4. Parameter tables are always one dimensional.
  5. Parameter Dashboards allow only one parameter table per dashboard. Also, the parameters in that table can't be rearranged on the dashboard (except possibly re-sorted by re-sorting the underlying parameter table).
  6. I can link a parameter to an "Edit" element on a dashboard. This sort-of gets what I need (but with an extra non-intuitive step).
  7. However, I can no longer use the "Table" element on a dashboard if I end up linking any of the cells in that table to a parameter. Instead I have to link each individual cell to an "Edit" element on the dashboard.
Phil BoBo ♦♦ replied to Craig DIckson:

1. Previously, to use the experimenter, you had to define scenario variables. There was no other option.

2. If you choose to point a parameter to another data structure, then yes, that parameter controls that data structure. If you want to get the parameter's value from another data structure, then you should select an Expression type for that parameter and get its value from the other data structure. It is flexible.

3. Only if you choose to use them that way.

4. Parameter tables now have several different available Types, including the Option and Expression types. Previously, scenario variables could only be defined using strings that were evaluated as FlexScript. Now you can specify various types and ranges for each parameter.

5. Parameter dashboards are just a new shortcut way to create a bunch of model input widgets easily and quickly that automatically point at your parameters using their defined constraints and types. If you want to continue to make your own dashboards and edit each widget one at a time, you are welcome to continue to do it that way.

6. I don't see much difference between linking a dashboard edit input field to a parameter vs another node's value. You can use the sampler either way.

7. If you choose to link up your parameters to overwrite your tables, then sure, you'll need to link your dashboard to your parameter. If you edit the table directly with a table dashboard input, then it will be overwritten. If you want the table to be the main input to your parameter, then configure your parameter to read that table with an expression instead of writing that table with the Reference and On Set triggers.
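
A minimal sketch of that last suggestion (an Expression parameter that reads the table rather than writing it); "SetupTime" and the table cell used here are placeholder names, not taken from the model in question. The parameter's Expression value would be something like:

Table("GlobalTable1")[1][2] // evaluated when the parameter is read, so the table stays the primary input

and the model logic reads the parameter wherever the value is needed, e.g. in a process time field:

return Model.parameters.SetupTime; // the experimenter can still override this per scenario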

Craig DIckson replied to Phil BoBo ♦♦:

@Phil BoBo WRT 3, can you walk me through it? I was unable to make it work such that I could create a parameter for use by the experimenter without it then overwriting my table when I did not want to use the experimenter and instead wanted to use the table (or a dashboard table pointing at the table) as my primary input. (A list of parameters is not a useful input format for an executive!)

anthony.johnson answered (edited):

Here's another alternative to get it working the way it used to. I think the main change that's messing up your workflow is that model parameters, by default, assert their "scenario" when you reset the model, i.e. it sets the global table value, whereas before (when they were just experiment variables) they did not. We made this change for what we believe to be very good reasons. However, it appears that no matter our reasoning for this change, you strongly disagree. So rather than re-argue our side, here's a solution that should essentially get you back to the way FlexSim used to work.

Make the first parameter in the table a binary parameter called UsingExperimenter. In the table itself, give it a value of 0. Then, for every other parameter, make the condition of its OnSet trigger be that UsingExperimenter must be 1. Then, when you define experimenter scenarios, make sure UsingExperimenter is 1 in every scenario.
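
A minimal sketch of what that condition could look like (UsingExperimenter is the parameter named above; the table cell and the "value" stand-in for the parameter's new value are placeholders):

// condition wrapped around each parameter's On Set action
if (Model.parameters.UsingExperimenter == 1) // only assert the scenario while experimenting
	Table("GlobalTable1")[1][1] = value;     // otherwise the global table keeps whatever the user entered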

This way, the model will only "assert" a given scenario, i.e. set the global table value, when the experimenter is being used. Thus you get all the same functionality you got before, like dashboard table views, editing the global table values directly, etc. and you can use the experimenter to experiment with these values.

In the meantime, Jordan is mulling over various options for better linkage between model parameters and node values in the model, similar to your bi-directional suggestion.

ChangeTableInExperimentOnly.fsm



Jordan Johnson answered:

Hi Craig, I can confirm these details:

  1. Yes, parameters are the only way to use the Experimenter (and optimizer) now. As we add additional features that need to manipulate input values (such as AI integration, or maybe better default web interfaces), those features will also rely on the parameter tables. We have promoted the practice of making parameter tables to a supported feature.
  2. Yes, parameters become the controlling values for the model. That is their purpose. Many parameters set various things in the model on reset. The update script we wrote adds parameters that set node values in global tables; the parameters control the table values. Usually, however, users create the parameters themselves, and so they aren't surprised when things change in their model on reset. Note that if you set the "Default Reset Scenario" in the old experimenter, you had the exact same problem: the experimenter suddenly controlled the model, not the global tables, and values changed on reset without telling you.
  3. A global table is just a view of a bunch of nodes, so using a table view, a dashboard, or an edit field all change the node values. The same is true for parameter tables, except that they (possibly) have OnSet code that fires whenever they change, and also on reset. It is as if you could attach a trigger to a cell in a table.
  4. If by "one dimensional" you mean that they are a list of inputs, then yes, that is true. The experimenter and optimizer (and future features) expect to change a list of input values, so parameter tables provide that list.
  5. That is correct; it's a one-to-one relationship between parameter tables and parameter dashboards at the moment.
  6. You can link an edit. But as you can see with the parameter dashboard, you can link other types of controls as well, that may be more suited to the parameter's possible values (like a checkbox for binary values).
  7. This is something on our minds. Maybe it would be good to have a table view of a parameter table on the dashboard. And it is technically possible now, but that table view lets you do a bunch of things that would break parameter tables, so we don't really want people to do that just yet.

It isn't necessary to have OnSet logic in a parameter. You can configure your model to simply "read" the value at certain points. For example, you can have a Scheduled Source in Process Flow create a number of tokens based on a model parameter, or have a processor get a parameter's value as a process time (Option and Expression parameters are useful in these cases). You can even write code in the On Reset of your model that reads the parameters and configures the model that way. OnSet triggers can be very convenient, however.
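
For example, two ways that "reading" might look (a sketch; Type1ProcessTime and the table cell are placeholder names):

// a processor's Process Time field could simply read the parameter
return Model.parameters.Type1ProcessTime;

// or the model's On Reset trigger could read it once per run and configure the model
Table("GlobalTable1")[1][1] = Model.parameters.Type1ProcessTime;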

To return control to a global table, you'd need to make a custom button or user command and write code that sets parameter values according to a global table. But you'll need to press that button manually; parameters fire their OnSet triggers before anything else in the sequence of logic that runs when you click the reset button.
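
One way that button or user command might look (a sketch; the parameter and table names are placeholders, and the by-name assignment through Model.parameters is an assumption to verify against your FlexSim version):

// hypothetical user command: copy GlobalTable1 back into the parameters,
// firing each parameter's On Set trigger in the process
// (by-name assignment assumed; adjust to the Parameters API in your version)
Table gt = Table("GlobalTable1");
Model.parameters["Type1Min"] = gt[1][1];
Model.parameters["Type1Mode"] = gt[1][2];
Model.parameters["Type1Max"] = gt[1][3];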


Jordan Johnson ♦♦ commented:

I hear your frustration with this change. It is frustrating to upgrade and not have things work the way they used to. We actually try to get lots of feedback (including from our own internal consulting department, our distributors and their customers, as well as from tech support). I say that because what our users think and want is important to us.

I think I understand your complaint. Before, you set up your model with whatever global tables you wanted. Then, you could make easy dashboards for changing values in those tables. Then, when you wanted to run an experiment, it was easy to add a flat list of all the nodes you wanted to change in a global table. Now, that last step has changed, and you have to add parameter tables, and since they change your table values, it makes your table widgets in the dashboards useless. So now it feels like you are choosing between making a nice UI (the way you are accustomed to) and using the experimenter effectively, where before you didn't really have that trouble.

As Phil suggested, an Expression or Option parameter may be the way to go for you. Take the process time situation, for example. You could make an Expression parameter, and in the Parameter Table in the model, you could give it this value:

[Table("GlobalTable1")[1][1], Table("GlobalTable1")[1][2],Table("GlobalTable1")[1][3]] 

To use this parameter you'd write code like this:

Array args = Model.parameters.Type1Time;
return triangular(args[1], args[2], args[3], getstream(current));

Then, in the experimenter, you can add the parameter that exists in your model, and set its value to whatever you want:

[1, 10, 5] // scenario 1
[2, 8, 3]  // scenario 2

With this approach, your global tables (and dashboards) control the model while you are running the model for your execs, and the Experimenter can still set the values to whatever you need.

I made a demo model that uses this approach. I actually think it was easier to verify that my scenarios were specifying the correct arguments for the distribution this way.

ParamTableDemo.fsm

Note that if you want to use the optimizer, you'll need to use an Option parameter to specify the set of possible expressions. However, I don't think your current approach would work in the optimizer without specifying dozens of constraints like:

Type1Min <= Type1Mode <= Type1Max

Otherwise the optimization will make no sense. If you go the Option route, then the first option can be "Use Global Table Values" or something like that. Note also that people don't generally optimize process times.

Craig DIckson replied to Jordan Johnson ♦♦:

@Jordan Johnson To be clear, my frustration isn't with change, it's with lost functionality. I understand and actually support your goal of making the experimenter more straightforward. But it seems you did that at the expense of dashboards. I can't imagine I'm unique in using both in the same model.

I don't immediately understand your workaround but I'll take a look at it. But it does feel like a workaround, trading one annoyance (using the experimenter as it was before parameters) for another (using dashboards with parameters).

Phil BoBo ♦♦ replied to Craig DIckson:

It is only a "workaround" because your question has the limitation "For a variety of reasons I do not want to replace the global tables with parameter tables."

If you replace your global tables with parameter tables and link your dashboard inputs to parameters, then all the complication in this thread goes away and there are no more "workarounds."

They only seem like workarounds because we're trying to explain how to do something within your particular set of limitations. If you use parameter tables as they are designed, then you don't have this problem. This isn't a general problem to everyone who uses the software; this is a specific problem for you in trying to continue to use old paradigms within the new system.

Craig DIckson commented:

@Jordan.Johnson, as it stands now, without significant rework I cannot use the new experimenter with any of my existing models that get inputs from tables or dashboards (and that is most of them).

While parameters are inherently one-dimensional, many of the inputs in my models live in two-dimensional tables because that makes sense for the data. Process times are a perfect example. Assume there are 12 item types, each item type has a different process time, and that process time is triangular with a min, mode, and max: the right way to show and input that data is a table of 12 rows and 3 columns (or even more columns if there are other parameters to set). A list of 36 (or more) parameters is a poor way to display that data and would just invite mistakes. (Keep in mind that my models are often also used by project managers and executives, so I need to make the data as clear and intuitive as possible, but I still need to use the experimenter.)

I could create three parameter tables -- one for the mins, one for the modes, and one for the maxes -- but really that would be just as silly and hard to read, especially since AFAICT the three parameter tables couldn't be shown on the same Parameter Dashboard. Or I guess I could have twelve different parameter tables, one for each type? At least then the min, mode, and max would be together, but still, 12 tables when I really need one? LOLOL.

And besides the inability to display inputs in a useful way, using parameter tables as described in the documentation complicates the coding, since I can't refer to the parameters in two dimensions (e.g. Table("tableTimes")[token.itemType]["MAXIMUM"]).
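
For reference, this is the kind of 2D lookup the current logic relies on (a sketch for a Process Flow code field; MINIMUM and MODE are assumed column names mirroring MAXIMUM above):

// process time pulled from a 12 x 3 global table, keyed by item type row and named columns
return triangular(
    Table("tableTimes")[token.itemType]["MINIMUM"],
    Table("tableTimes")[token.itemType]["MODE"],
    Table("tableTimes")[token.itemType]["MAXIMUM"],
    getstream(activity));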

Using the above example, it used to be simple to make a useful dashboard for the executives (and clearly a list of 36 items isn't useful for them), while still allowing me to use the experimenter:

  1. Create a dashboard and drag a "Table" element on to it from the library.
  2. Use the dropper to sample the 12 x 3 table that my logic uses.

With FlexSim 2021 it's far more complicated and mistake-prone:

  1. Create a parameter table with 36 parameters (or 3 tables with 12 parameters, or 12 tables with 3 parameters)
  2. Manually name each of the 36 parameters in a way that conveys the tabular nature of the information, without making a mistake
  3. Connect each parameter to the corresponding cell in the 12 x 3 table my logic uses, without making a mistake.
  4. Set the range limits for each of those 36 cells
  5. Create a dashboard
  6. Drag 36 "Edit" elements onto the dashboard
  7. Arrange those 36 items to be usefully displayed, i.e. in 12 rows of 3
  8. Connect each one of those 36 elements to the corresponding parameter, without making a mistake
  9. Drag at least 15 (12 + 3) "Static text" elements onto my dashboard to label the inputs. (In the current model the global table row and column names provide the info.)

Using the experimenter itself isn't meaningfully different in either case. And by the way, I do actually like the new way to work with performance measures.

What makes this particularly challenging is that not every input in a model needs to be a parameter for the experimenter, and it is hard (impossible) to know which ones will, especially if the system develops over the course of the modeling project. (They always do!) Many of my models have hundreds, if not thousands, of inputs. (They're big models of the equipment, the equipment algorithms, and the WMS/data processing logic for large distribution centers.) So either I spend the time to do all those 9 steps for every possible input cell up front (but only use a fraction of them), or I rework the dashboards every time a team member needs a different experimenter run.

Practically speaking, I guess I am at a loss as to why making the connection from a parameter to a data cell bi-directional would be a problem -- similar to the relationship between a table and a table element on a dashboard. If you did that I'd be perfectly happy. It seems like you'd still be able to get a nice simple list for the experimenter to use, but without forcing the human user to work in a list also (or jump through hoops to avoid it). Perhaps the parameter table should have a checkbox similar to the "Reset Default Scenario" on the old experimenter? (Which I NEVER used, BTW. I honestly can't think of a situation where I would want it.) If you have another way that easily lets me both have input dashboards and use the experimenter, I am all ears.

Emotionally speaking, I find this incredibly frustrating. New versions of software aren't supposed to remove functionality, but FlexSim 21 has, both here and with conveyors. It may be true that the new way is "better" in a computer science sense, but if the new way breaks things that existing users actually use, is it really better? (Particularly because FlexSim users tend to be experts in their own field (materials handling, medical processing, robotics, etc.), not in computer science.) I have used FlexSim since version 16 (and competing simulation tools for many years before that) and I always looked forward to updates. Now I will dread them. I sure hope that you find out what your users actually use -- and how -- before the next round of changes.

Phil BoBo ♦♦ replied to Craig DIckson:

Your steps in your example skip all the steps required to add those 36 values to the old experimenter.

The old experimenter only changed 1 value per row, just like there is only 1 value per row in a Parameter table.

You entirely skipped the steps where you have to specify each of those values in the experiment table.

This is how your example would look before:

This is how it can look now:

You'll notice that I didn't even finish filling out the experimenter definition for the "old way" because it was taking too long and was error-prone. The new way simply required specifying the expression in the parameter table instead of in the activity and then selecting that parameter in the activity. Much easier, faster, and less error-prone.

In this way, that table is now the driving input for the interactive model run and the experimenter. You aren't getting surprised when your experimenter is returning unexpected values. What you see is what you get now instead of the experimenter's scenario being a mystery where you hope it behaves how you expect. Now you can see exactly what the model is doing for a given replication when you run interactively without the experimenter.

Attached screenshots: 1620665014537.png (215.3 KiB), 1620665023372.png (164.0 KiB), 1620666652599.png (164.4 KiB)
Craig DIckson replied to Phil BoBo ♦♦:

@Phil BoBo That's a false equivalency. The difference is that in the old way, **I didn't have to add all 36 to the experimenter** because I don't experiment with all 36. As I pointed out, with a decent-size model it's not clear at the start which factors will be sensitive enough to warrant multiple scenarios. The dashboard helps us narrow down the set of experiments to run. Then I put those - and only those - into the experimenter.

With the new way, I have to set absolutely everything up that way from the start or face a huge amount of rework.

As I told @Jordan Johnson a few minutes ago, I'm all for making the experimenter better with parameters. But why can't you do it in a way that doesn't basically deprecate your existing input dashboard features? They are really two different features, but you're not treating them that way.

Phil BoBo ♦♦ replied to Craig DIckson:

> What makes this particularly challenging is that not every input in a model needs to be a parameter for the experimenter, and it is hard (impossible) to know which ones will, especially if the system develops over the course of the modeling project. (They always do!) Many of my models have hundreds, if not thousands, of inputs. (They're big models of the equipment, the equipment algorithms, and the WMS/data processing logic for large distribution centers.) So either I spend the time to do all those 9 steps for every possible input cell up front (but only use a fraction of them), or I rework the dashboards every time a team member needs a different experimenter run.

That sounds like a complaint about the old way of defining experimenter scenarios.

With the new Parameter Tables, you can simply check or uncheck which parameters you want to use as experimenter scenario inputs, rather than having to muck around with the scenario table every time something changed, as you had to with the old experimenter.

Craig DIckson replied to Phil BoBo ♦♦:

@Phil BoBo As far as I can tell, once I define a parameter (so I can use it when I work with the experimenter, which is not all the time) the parameter becomes the only way for a user to work with that value -- which is extremely inconvenient when I want to use the dashboard instead of the experimenter. (LOL there's no way I'm going to teach my executives or project managers how to use the experimenter, old or new, or ask them to work with a list of 150 parameters.)

If I am wrong on that, can someone please show me how to do it easily?

I had no love lost for the old experimenter, but at least I only had to set it up for the things I needed to experiment with (as opposed to everything I **might** have to experiment with, which is a much larger set), and once I did, it wouldn't affect anything else. Unless there's something way deep in the bowels of the underlying code (which isn't my problem), I just don't see why parameters have to be treated as the be-all and end-all of data entry.

Jason Lightfoot ♦♦ commented:

The AI interface needs to be at a lower level / more flexible than just parameter tables. It should be able to learn and make decisions as the model runs, repeatedly, during the run. If this is the reason for changing everything, then it's short-sighted/misguided.

Jordan Johnson ♦♦ replied to Jason Lightfoot ♦♦:

Parameters are designed to allow change during the model run. There is nothing in parameters that forces the change only at the beginning; right now, most pick options assume that they are only applied on reset. But the design is meant to allow things to change during the model run. In addition, it is not the only reason for this change.

Craig DIckson replied to Jordan Johnson ♦♦:

@Jordan Johnson @jason.lightfoot The issue isn't whether they can change during the run -- of course they do -- the issue is how a user can change the value manually before a manual run, in a place where it makes sense to change it (i.e. a global table or a dashboard), without the parameter then overwriting that desired value on reset or model start. If I am using the experimenter, then of course the value in the scenario takes precedence; it reaches out and sets the value in the parameter table, which then sets the value in the connected global table cell (or I guess you can use the parameter directly if you want to be limited to 1D arrays like late-70's FORTRAN). But when I am not using the experimenter, as it stands now, the value set in the parameter table will always overwrite any value my user sets in the global table. That is where my problem lies. I do not want to be forced to always use only parameters for data entry in order to use the experimenter occasionally.

You already have exactly the behavior I need elsewhere in FlexSim. When you connect an edit box on a dashboard to a cell in a global table, they become essentially the same place; each is a window to the other.

I keep hoping that maybe what I need is already there and I just haven't found it. If so, my apologies, and someone please show me. But if not, it's ridiculous. There is no possible way I am the only user who uses both input dashboards and the experimenter in the same models.
