I would be curious to have your feedback on a modeling practice. I am struggling with multiple declarations of the same attribute in different Excel input files, which sometimes gets confusing when I want to update a value. For example, I can define act_cost in bnewtech and also in a regular scenario that updates it. This can lead to a mixing of data when interpolation is applied, if the two files declare the cost for different periods (e.g., 2030 and 2050 in bnewtech, 2040 and 2060 in the regular scenario). In the future I would like to avoid duplicate declarations of an attribute. The notion of duplicate declaration makes sense at the case level: a model can include multiple regular scenarios with different assumptions.
I am not very familiar with the TIMES code, but it seems that at the step where all the DD files are read, the last value read is kept in case of a multiple declaration. Do you know how I could get a warning when a value is overwritten at the step where GAMS is called? I can do the investigation manually with the Browse function, but I would like a more automated way.
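To make the kind of automated check I have in mind more concrete, here is a rough sketch of a script that scans the DD files of a case and reports any parameter record declared more than once. The DD layout it assumes (PARAMETER blocks with records like 'REG'.'2030'.'ELCTNGAS00' 800, closed by "/;") is only my guess and would need to be adapted to what VEDA actually writes:

# Rough sketch of the duplicate-declaration check (not tested against real DD files).
# Assumed layout: "PARAMETER <name>" opens a block; data records look like
# 'REG'.'2030'.'ELCTNGAS00' 800 ; the block ends with "/;".
import glob
import re
from collections import defaultdict

seen = defaultdict(list)            # (parameter, record key) -> [(file, value), ...]

for path in sorted(glob.glob("*.dd")):      # all DD files of the case
    current_param = None
    with open(path, errors="ignore") as fh:
        for raw in fh:
            line = raw.strip()
            header = re.match(r"PARAMETER\s+(\w+)", line, re.IGNORECASE)
            if header:
                current_param = header.group(1).upper()
                continue
            if line.startswith("/;"):       # end of the data block
                current_param = None
                continue
            if line == "/" or not current_param:
                continue
            record = re.match(r"([\w.'-]+)\s+([-+.\dEe]+)\s*$", line)
            if record:
                key = record.group(1).replace("'", "")
                seen[(current_param, key)].append((path, record.group(2)))

# report every record that is declared more than once
for (param, key), sources in sorted(seen.items()):
    if len(sources) > 1:
        print(f"WARNING: {param}({key}) declared {len(sources)} times:")
        for path, value in sources:
            print(f"    value {value} in {path}")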
08-04-2024, 07:29 PM (This post was last modified: 08-04-2024, 07:30 PM by AKanudia.)
I am sure I can give you some useful suggestions, but I am not able to understand the problem very well. Allowing alternate values for the same attribute is a core feature of Veda that is very powerful if used well. I think you acknowledge that in "it makes sense at the case level". I don't understand what type of duplication is bothering you. And is it only about locating the right seed value for updating, or about merging different years of a time series?
Thanks. Please find attached an example of mixing of NCAP_COST data for the process ELCTNGAS00. There are two issues here: values from the two sources are mixed, and the 2050 value is overwritten by the updating regular scenario in the case elctngas00_invcost_update. I would like a warning telling me that I have defined NCAP_COST twice and that I am mixing values from two sources.
Concerning GAMS, there is no practical way in GAMS to detect such multiple definitions.
There is of course the $OFFMULTI setting, but that would prevent any symbol from being redefined, which would prevent basically the whole concept of scenarios, and so cannot be used.
However, under VEDA you can use the TS_Filter mechanism to be sure that no mixing of time-series data occurs between scenarios. It can be used on a per time series basis, and is quite easy to use. It comes in two flavors: Filter out values in all preceding scenarios (TS_Filter=0) or filter out values in all other scenarios (TS_Filter=1). Perhaps you might find that feature useful?
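Just to illustrate how it could look in a table (the layout below is only a sketch, and the column names and values are purely indicative, so please check them against your own templates):

~TFM_INS-TS
Attribute    Pset_PN       TS_Filter   2040   2060
NCAP_COST    ELCTNGAS00    1           850    800

With TS_Filter=1 on that row, NCAP_COST values defined for ELCTNGAS00 in all other scenarios would be filtered out; with TS_Filter=0 only the preceding scenarios would be filtered.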
09-04-2024, 07:17 AM (This post was last modified: 09-04-2024, 10:47 AM by AKanudia.)
Responsible UPDating has two rules: 1. You should have a feel for the seed values. 2. You should confirm the output in browse or items view.
If you don't have #1, then you MUST check the potential seed values in browse, where you can immediately spot if the values come from multiple scenarios. #2 is recommended for all new tables, but it is a must for the ambitious ones.
Another idea would be to build a dedicated scenario file for a comprehensive view of such cases. TFM_FILL_R: hcol=attribute;w=attribdata; attribute=INVCOST; pset_pn=*; value=*1
You can be creative with the resulting table using formulas, macros, or pivot tables (COUNT).
The FILL table idea doesn't work because these tables return the "last" source when multiple are found.
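As a sketch of the COUNT idea, assuming the seed values can be exported (e.g., from the browse view) to a CSV file with Scenario, Attribute, Process and Year columns - the file name and column names below are only placeholders:

import pandas as pd

# Placeholder export of seed values, one row per Scenario/Attribute/Process/Year
df = pd.read_csv("seed_values.csv")

# number of distinct source scenarios for each attribute/process combination
counts = (df.groupby(["Attribute", "Process"])["Scenario"]
            .nunique()
            .reset_index(name="NumScenarios"))

mixed = counts[counts["NumScenarios"] > 1]
if mixed.empty:
    print("No attribute/process gets values from more than one scenario.")
else:
    print("Attribute/process combinations mixing values from multiple scenarios:")
    print(mixed.to_string(index=False))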
We will think about a sync log warning when a single row of UPD/MIG tables uses seed values from multiple scenarios. If this check affects performance, we will give users the option to disable it.
(08-04-2024, 09:08 PM)Antti-L Wrote: Concerning GAMS, there is no practical way in GAMS to detect such multiple definitions.
There is of course the $OFFMULTI setting, but that would prevent any symbol from being redefined, which would prevent basically the whole concept of scenarios, and so cannot be used.
However, under VEDA you can use the TS_Filter mechanism to be sure that no mixing of time-series data occurs between scenarios. It can be used on a per time series basis, and is quite easy to use. It comes in two flavors: Filter out values in all preceding scenarios (TS_Filter=0) or filter out values in all other scenarios (TS_Filter=1). Perhaps you might find that feature useful?
Thanks. I tried the $OFFMULTI option at the top of the DD files, but it generates many errors that are not explicit. I agree this is not a workable solution. I have difficulty relating the GAMS description of these control options to TIMES and clearly understanding what a symbol is.
I did not know about TS_Filter, which would be interesting when I want to ensure that there is no overlap between time series. However, I would be more interested in a method that raises warnings of duplicate definitions. I have not found references to ts_filter in VEDA documentation; do you know where I can find a description, please?
09-04-2024, 10:09 PM (This post was last modified: 09-04-2024, 10:10 PM by VictorG.)
(09-04-2024, 07:17 AM)AKanudia Wrote: Responsible UPDating has two rules: 1. You should have a feel for the seed values. 2. You should confirm the output in browse or items view.
If you don't have #1, then you MUST check the potential seed values in browse, where you can immediately spot if the values come from multiple scenarios. #2 is recommended for all new tables, but it is a must for the ambitious ones.
Another idea would be to build a dedicated scenario file for a comprehensive view of such cases. TFM_FILL_R: hcol=attribute;w=attribdata; attribute=INVCOST; pset_pn=*; value=*1
You can be creative with the resulting table using formulas, macros, or pivot tables (COUNT).
The FILL table idea doesn't work because these tables return the "last" source when multiple are found.
Thanks. It seems my problem is related to good coding practices and code smells. I realize I am not using all of VEDA's features; I am not very familiar with TFM_FILL and tend to use TFM_INS instead of TFM_UPD.
I agree that the modeler is responsible for the quality of the coding and should double-check with browse and so on, but this manual process is not immune to human error, which justifies warnings and automated tests on top of visual checks. It seems your TFM_FILL solution would go in the direction of automated checks, but unfortunately it does not work.
I noticed that the open TIMES models use prefixes to order the regular scenarios in the navigator. The order may play a role for readability and also when data is overwritten by multiple declarations. I have found it more convenient to declare the values of an attribute only once, to update the values directly within the regular scenario (e.g., when the source changes), and to keep track of the changes with version control tools. However, this is not 100% robust, and I would like a way to automatically check each time I run a case and report duplicate-definition warnings. Anyway, I will think about these questions and let you know if I find something convenient.
>However, I would be more interested in a method that raises warnings of duplicate definitions.
Yes, I understand. But I think the TS filtering can be useful when you know that you don't want to have duplicate definitions, because it filters out the data defined for the same time series in other scenarios.
>I have not found references to ts_filter in VEDA documentation; do you know where I can find a description, please?
I don't know where it would be documented; the VEDA support team should be able to tell. But it is listed as a supported column for TFM_INS / TFM_DINS / TFM_INS-TS / TFM_DINS-TS under the VEDA tags in the online help.