Plausibility check
#1
Just recently my model began to behave really strangely: it imported scenario data from Excel perfectly into VEDA-FE (attribute CAP_BND), but then one single data point was not shown in VEDA-BE after the model run (attribute PAR_CapUp).

The value of 45 GW for the capacity upper bound in region 'DE' in year 2020 was simply missing, whereas all the other data were shown as expected.
So I began to search and, after a short while, I found out that (following a recent data update) the STOCK value for this technology in this region in this year was given as 45.169 GW - a tiny difference with huge consequences.

In such cases it would be really nice if VEDA-FE could at least show a warning (after importing) indicating that there is contradictory data in the model, instead of just overriding the given value for CAP_BND.

Would it be possible to implement this anytime soon?

Thank you and cheers!
Fabian
#2
You say: "... a tiny difference with huge consequences".
I am curious as to what these huge consequences are. TIMES just removes the inconsistent CAP_BND and replaces it with a zero NCAP_BND. So, if you set a CAP_BND(UP) lower than the existing capacity, any new investments will be prohibited. But I cannot see how there could be huge consequences. Can you elaborate?
#3
Yes: I do automated post-processing with the results from the VBE csv files (consistency checks, derivation of indicators, all kinds of plotting, e.g. pie charts of the capacities on geographical maps, etc.).
In this case, I was deriving indicators showing how much of the allowed capacity has actually been used, by dividing VAR_Cap by PAR_CapUp (per process, per period, per region, summed, etc.).
If PAR_CapUp is not reported consistently, my post-processing script throws warnings (since divisions by zero could occur or, in a summed case, the indicators reported are just wrong, since they could be higher than one).
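
As an illustration, here is a minimal sketch of such a check in Python (the file name and the column names Attribute, Process, Region, Period and PV are my assumptions about a VBE csv export, not a documented interface):

Code:
import pandas as pd

# Load a VBE csv export; the layout is assumed for illustration only.
df = pd.read_csv("results.csv")

idx = ["Process", "Region", "Period"]
cap = df[df["Attribute"] == "VAR_Cap"].set_index(idx)["PV"]
cap_up = df[df["Attribute"] == "PAR_CapUp"].set_index(idx)["PV"]

# Align the two series; a missing PAR_CapUp row (like the dropped
# DE/2020 bound) then shows up as NaN instead of silently vanishing.
cap, cap_up = cap.align(cap_up)
for key in cap[cap.notna() & cap_up.isna()].index:
    print(f"warning: VAR_Cap without PAR_CapUp for {key}")

# Guard against division by zero before deriving the utilization indicator.
utilization = (cap / cap_up).where(cap_up > 0)
print(utilization.dropna().describe())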

Therefore, it would just be useful to see such data errors before one performs model runs. That would save time, since the results of an inconsistent model run are trash anyway...
#4
Ok, I see your point.

But I disagree about "the results of an inconsistent model run are trash anyway... ".
It can be a convenient way to prohibit new investments for some technologies by setting CAP_BND=0. I am using it that way, and I am not expecting any warnings or infeasibility. In other words, the current behaviour is what I want. It is not a data error.

It would certainly not save time to get infeasible model runs and then try to find the cause. If such inconsistent bounds were left in place, that is what would happen: your runs would end up infeasible, and you would not get any PAR_CapUp reported. But I agree that either VEDA or TIMES could report such inconsistent bounds. Am I correct that you would prefer VEDA to report them, and not TIMES?

I would also like to point out that PAR_CapUp is indeed reported consistently. It reports the CAP_BND applied in the model run. It would not be consistent to report the inconsistent bounds, because such a model run would fail.
#5
(19-01-2018, 03:38 PM)Antti-L Wrote: But I agree that either VEDA or TIMES could report such inconsistent bounds.

Hi Antti,

Is that not reported in the QUALITY ASSURANCE LOG? I would expect to find this kind of information there...

Best regards,
Olex
#6
No, it is not reported there, because it is so obvious that an upper CAP bound can easily violate the existing capacity (intentionally or inadvertently); such CAP_BND bounds are therefore just removed, and zero NCAP_BNDs are generated instead (which I think is usually exactly what is desired).

But sure, it could be reported in the QA_Check.log, if there is general interest in that. Fabian, however, wanted to have it reported already before running the model...
#7
(19-01-2018, 03:38 PM)Antti-L Wrote: But I disagree about "the results of an inconsistent model run are trash anyway... ".
It can be a convenient way to prohibit new investments for some technologies by setting CAP_BND=0. I am using it that way, and I am not expecting any warnings or infeasibility. In other words, the current behaviour is what I want. It is not a data error.
Okay, interesting that this has been intended behavior! But what I don't understand yet: when you want to prohibit new investments in some technologies, why wouldn't you just set NCAP_BND=0 instead of CAP_BND? At least to me, this would make more sense, because there can be remaining capacity in the model, right?


(19-01-2018, 03:38 PM)Antti-L Wrote: It would certainly not save time to get infeasible model runs and then try to find the cause. If such inconsistent bounds were left in place, that is what would happen: your runs would end up infeasible, and you would not get any PAR_CapUp reported. But I agree that either VEDA or TIMES could report such inconsistent bounds. Am I correct that you would prefer VEDA to report them, and not TIMES?
Sure, I absolutely agree on that. Getting infeasible model runs is always painful and not an option at all. So either put in dummy variables (such as IMPNRGZ) or directly alert the user.
Yes, in my case I would indeed prefer VEDA-FE reporting them (maybe not as an error, but at least a warning).

(19-01-2018, 03:38 PM)Antti-L Wrote: I would also like to point out that PAR_CapUp is indeed reported consistently. It reports the CAP_BND applied in the model run.  It would not be consistent to report the inconsistent bounds, because such a model run would fail.
(19-01-2018, 08:03 PM)Antti-L Wrote: No, it is not reported there, because it is so obvious that an upper CAP bound can easily violate the existing capacity (intentionally or inadvertently); such CAP_BND bounds are therefore just removed, and zero NCAP_BNDs are generated instead (which I think is usually exactly what is desired).

But sure, it could be reported in the QA_Check.log, if there is general interest in that. Fabian, however, wanted to have it reported already before running the model...
Hm, we might have a different view on the programming paradigm... In my view, a fail-safe program should always get back to the user as soon as it notices inconsistencies that the user is responsible for, before doing anything else. For example: "It seems like you want to print a page that does not fit onto the chosen paper size. Are you sure that you want to do it?", or "The file you are trying to open does not match the expected form of this file extension, do you really want to open it?", etc.
Back to the modelling: I think TIMES should not override anything or even remove given bounds. The user had a certain intention when he set up the bounds.
So rather point the user to this issue in a friendly way: "Hey, there are parameters given that would lead to an infeasible model run. Do you want TIMES to remove the inconsistent bounds for you? If not, please click 'Abort' and check the given scenario data before we proceed." Or something like that ;-)
#8
Hmm... yes, I agree with your views on "fail-safety" in a more general setting, but not in this specific case.

I also agree that the user had a certain intention when he set up the bounds. But I don't think the intention can ever be to get an infeasible model, and so there is absolutely no point in running such a model; it would only waste a lot of time. In my view, the intention would in practically all cases be that the upper bound should be the maximum of the existing capacity and the bound defined, which is equivalent to what TIMES does.
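
To make the equivalence concrete, here is a schematic sketch (in Python, purely illustrative; the actual TIMES code is GAMS and the names are just for exposition):

Code:
def effective_bounds(stock, cap_bnd_up):
    # An upper CAP_BND below the existing capacity is removed, and a zero
    # NCAP_BND (no new investments) is generated instead; the effective
    # capacity limit thus becomes max(stock, cap_bnd_up) = stock.
    if cap_bnd_up < stock:
        return {"CAP_BND(UP)": None, "NCAP_BND(UP)": 0.0}
    return {"CAP_BND(UP)": cap_bnd_up, "NCAP_BND(UP)": None}

# Fabian's case: STOCK = 45.169 GW, CAP_BND(UP) = 45 GW
print(effective_bounds(45.169, 45.0))  # CAP_BND dropped, NCAP_BND(UP) = 0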

TIMES also removes other inconsistent bounds, but those are reported in the QA check log. It is not my design, but the design of the ETSAP working group when TIMES was developed.

As the current maintainer of TIMES, I can help by adding the QA_Check.log warning for the CAP_BND case as well. Unfortunately, it will not be of much help to you, as you wanted to have the warning already before the run, which is not in the scope of TIMES but of VEDA.
#9
Thanks for taking care of this, Antti! This way it will also be consistent with, e.g., the relaxation in the case of UP and FX shares being less than 1, etc. :-)

In general, I think it is very important to reflect any kind of relaxation or other adjustment of user-specified inputs in the log, to ensure transparency and feedback to the user. Otherwise, especially new, inexperienced users might think that TIMES is not working properly. Actually, is there a switch to turn off this functionality that makes the model feasible despite the inconsistent input from the user? Based on my experience with 90 students over the last 3 weeks, I think such a switch could facilitate deeper learning and reflection. Their models were quite small, so it did not take much time to run them; and in case of infeasibility, the infeasibility finder would let them know right away where the problem was. I am thinking about the switch mostly for educational purposes.

@Kanors, could VEDA-FE draw the user's attention to the QA log after a run is finished, in case the log is not empty (e.g. by putting a message like "Check the QA log" in red under the objective function value, or by displaying an "Open QA log" button next to OK)?

Cheers,
Olex
#10
Very well, I think this discussion is going in the right direction :-)
To sum it up shortly:
We all agree that it does not make any sense to run an infeasible model! Thus, if TIMES notices that infeasible equations have been set up, something needs to be done before the model run. When it comes to the capacity equation, TIMES currently removes infeasible bounds, effectively replacing them with the installed capacity, but it does so silently, i.e. without letting the user know.
As a first step, it would be good to integrate a warning into the GAMS\WrkTIMES\QA_Check.log file, indicating which bounds have been removed or replaced.
As a second step, it would be nice if VEDA-FE could report a non-empty QA_Check.log file.
#11
(22-01-2018, 01:13 PM)fg Wrote: We all agree that it does not make any sense to run an infeasible model!

Fabian, a small correction: I think it does make sense to run a small infeasible model for educational purposes. The infeasibility finder will let the user know which values make the model infeasible, which in turn gives an opportunity for reflection. ;-)
#12
I fully agree with Olexandr about the pedagogical value of infeasibilities. VEDA can display the QA_Check.log file, but I suggest doing so only if it contains items above a certain severity level. There may be some mild warnings that I am willing to live with, and I would not want the log to pop up each time I send a run.
#13
Ok Amit, displaying this warning in VEDA only "above a certain severity level" is absolutely fine with me, as long as it is written into the QA_Check.log, where the user can see in detail what is happening.

