simplified ~TFM_INS table
#1
In our ELC model, we have a number of large ~TFM_INS data tables (see worksheet 'current input data' in the attached xls). It has become extremely difficult to manage/update these tables (there are more than 24 sheets of the same size). Can anyone propose a simplified ~TFM_INS table, e.g. with years as column heads? I have listed the repeated parameters and attributes in the second worksheet.
thanks


Attached Files
.zip   VFE-TFM_INS_table.zip (Size: 68.33 KB / Downloads: 14)
Reply
#2
Hi all,
It would indeed be powerful to have the possibility to include specific years as column heads in TFM_INS/TFM_UPD/TFM_DINS tables, similar to how specific regions can be included.
I understand that specific years as column heads are currently only possible in a TRANS file, not in a SCENARIO file.
Is there any particular reason for this?
This is particularly interesting for sensitivity analysis on technology learning, when you do not want to use the multiplier but another Excel formula instead.
Best
Wouter
Reply
#3
I do echo your suggestion, Wouter. An option to include years and attributes as column heads in INS/UPD tables would be very helpful for handling the data in Excel.

Reply
#4
VEDA FE already supports the following tags in Trans as well as regular scenario files:

~TFM_DINS-TS: like the regular DINS table, all indexes must be specified and no wildcards or comma-separated input can be used. Regions should be specified in a column with header=Region. Column headers in the data area are YEARS.

~TFM_DINS-AT: like the above, except that the column headers in the data area are ATTRIBUTES.

~TFM_INS-TS: like the regular INS table, but Regions should be specified in a column with header=Region and the columns in the data area are YEARS.
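
For illustration, a ~TFM_INS-TS table could look roughly like the sketch below (the timeslice, process and commodity names and the numbers are illustrative placeholders, and the exact header syntax should be checked against a working example):

~TFM_INS-TS
TimeSlice     Region   Attribute   Pset_PN    Cset_CN    2010    2030    2050
WIN-WD-D01    CH       COST        IMP_PRC    ELC_COM    15.0    14.5    14.0
WIN-WD-D02    CH       COST        IMP_PRC    ELC_COM    14.5    14.0    13.5

Each year column in the data area holds the values for that year, so one such table replaces a stack of per-year tables.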

Let's discuss specific examples where you think other tags, like INS-AT or UPD-TS, would be helpful.
Reply
#5
Thank you Amit, this is very powerful!

We tried ~TFM_UPD-TS, and when it did not work we wrongly concluded that the INS alternative would also not work in a regular scenario file. I do not see the need for ~TFM_UPD-TS, as ~TFM_INS-TS seems to work and does no harm?

So far there is no need for INS-AT or UPD-TS; however, in a similar way to ~TFM_INS-TS, it might be very powerful to have something like ~TFM_FILL-TS.
This would allow a very flexible way of altering model inputs (on top of the multiplier functionality, for when one wants to multiply by a fixed number).

Best
Wouter
Reply
#6
So far, I have always preferred a UPD operation over the FILL route. I have used FILL only to move information to UCs. Can you post an example where you think FILL would be better than UPD?
Reply
#7

 I am also using TFM_FILL quite a lot, and I don't think I could use TFM_UPD for the purposes I need.

The problem is that the processing of TFM_FILL takes ages. Just a simple table with about 200 data points takes about a minute to fill, while in the VEDA Browser I am able to retrieve the same data many times faster.

I wonder how the TFM_FILL processing can be so slow? Anyway, it would be really nice to have a new ~TFM_DILL tag, which would retrieve the data into the Excel table as fast as in the Browser (or, ideally, even faster). Would that be possible?

Reply
#8
We have not really focused on the efficiency of FILL tables so far. We will do so soon.

This is a good time to discuss something else that I have needed/considered at times: rule-based FILL tables. What if we had TFM_Fill-R, where one defines one or more rows of the usual VFE filters? No data would be expected here. The records that are shortlisted would be dumped as a flat table one row below this table. Then one can use VLOOKUP/SUMIFS etc. to pull information out of this dump. Of course, this would work MUCH faster than the regular FILL.
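
As a sketch of the idea (the filter headers follow the usual VFE filter columns; the attribute and process masks below are made-up placeholders, and a blank cell is meant as "no restriction"):

~TFM_Fill-R
Attribute   Pset_PN     Cset_CN    Region
COST        IMPELC_*               CH
EFF         ELC_*

The records matching these filter rows would then be dumped as a flat table starting one row below, and VLOOKUP/SUMIFS formulas elsewhere in the workbook would read from that dump.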

What do you think?
Reply
#9
Amit

I am looking for ~TFM_INS/~TFM_UPD tables something like the one below, i.e. with milestone years as column headers (as in a ~FI_T table). Since we have many timeslices, I currently have to repeat the table, with its TimeSlice column, for each milestone year.


~TFM_INS
TimeSlice    Region  Attribute  Pset_PN              Cset_CN            YEAR~2010  YEAR~2030  YEAR~2050  YEAR~2100  YEAR~0
WIN-WD-D01   CH      COST       IMPELC_AT,EXPELC_AT  ELC_I-AT,ELC_E-AT  15.46      44.49      43.46      43.57      1
WIN-WD-D02   CH      COST       IMPELC_AT,EXPELC_AT  ELC_I-AT,ELC_E-AT  14.75      42.44      41.46      41.56      1
WIN-WD-D03   CH      COST       IMPELC_AT,EXPELC_AT  ELC_I-AT,ELC_E-AT  14.37      41.35      40.40      40.50      1
WIN-WD-D04   CH      COST       IMPELC_AT,EXPELC_AT  ELC_I-AT,ELC_E-AT  13.97      40.19      39.26      39.36      1
Reply
#10

To Amit: Well, I just tested an example query, which retrieved about 8000 records from a medium-sized VEDA database. 

Executing the query took about 1 second.  Then I changed the query into a crosstab query, such that I get the regions as column headers.  Executing the query took again about 1 second (actually, it seemed even a bit faster).

My conclusion: I see no point in shortlisting the records as a flat table, as a crosstab query that formats the table with column headers does not take any more time, and it results in a much more compact table.

Reply
#11
Hi there.
I do not follow the point on the VFE filters. Could you give an example?
To Kannan: I think you can simply use ~TFM_INS-TS for what you want to do.
We are using it now and it is very handy!
Wouter

Reply
#12
This is an example:



Good luck with it
Wouter
Reply
#13
Thanks Wouter
I was unaware of the ~TFM_DINS-TS option. I have just tested it and it works.
We should clean up our database!
Reply
#14
Once again, one of Antti's ideas has resulted in a powerful new feature in VEDA FE.

You can use this beta EXE to test.

And here is an example scenario file for the DEMO model.


Attached Files
.zip   Scen_UltraFastFill.zip (Size: 25.3 KB / Downloads: 31)
Reply
#15
Amit, Antti,
This is a great feature again.
It can do more than we asked for.

The link to the newest VEDA FE did not seem to work at first, but it worked after simply removing the "http\https".

A bit more explanation: this feature allows you to pull data from a model and put it in a table where years or regions (or even a combination) can be the column heads.
The Excel file has 4 examples. The first sheet is used to define the selection of what you need: you put "~TFM_Fill-R:", the sheet name and the column heads in the first row, and the rows below define the selection of data you need.
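
Schematically (a sketch only; the precise syntax is in the attached example file), a selection sheet is laid out like this:

Row 1:   ~TFM_Fill-R:   <sheet name>   <column heads, e.g. years or regions>
Row 2+:  <the usual VFE filter rows defining the selection of data>

The retrieved records are then written out below, with the chosen years/regions as the column heads.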

I would recommend using this for faster imports and for different types of sensitivity analysis.
Wouter

Reply

