As the title suggests, I have been trying to figure out whether, and how, you can run a TIMES model on a Linux server on which GAMS is already installed. I've moved the GAMS_SrcTIMES.v.4.8.1 and GAMS_WrkTIMES folders onto the server and wanted to test whether running a vtrun file from one of the models within GAMS_WrkTIMES was possible.
Since I am on Windows, my vtrun file is a .cmd file, which naturally doesn't run on Linux.
Does anyone have any experience with the feasibility of the above, and/or a way to convert the .cmd file to an executable .sh file? I have attached a .txt version of the vtrun file, in case that's helpful.
I have been looking around on the forum and haven't found a similar question, but if this has already been answered, let me know.
15-01-2025, 07:53 PM (This post was last modified: 16-01-2025, 03:14 PM by Antti-L.)
This is the VEDA Forum and VEDA works only on Windows.
TIMES can be run "manually" (without VEDA) both under Windows and on Linux. I myself have no experience running it on Linux, but one should be able to run the TIMES Demo easily there (just with the command gams rundemo); see: TIMES Demo.
Running TIMES directly with GAMS from a console window is very simple, so if you are a Linux user it should be easy to write a shell script for it. In any case, the rundemo.gms file of the TIMES demo shows the command lines both for running TIMES with GAMS and for running gdx2veda to produce the result *.vd* files. One important aspect for Linux usage is adding the option filecase=4 to the GAMS command line.
Example command for running a TIMES case "casename" with GAMS (assuming the DD files are in the current dir): gams casename.run idir1=../timesv484 ps=0 gdx=casename filecase=4 r=../timesv484/_times.g00
The restart file (r=...) can be omitted if you have a full GAMS license.
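To make that concrete, here is a minimal shell-script sketch built around the example command above. The case name, directory layout, and the location of times2veda.vdd are assumptions taken from the example; adjust them to your own setup.

#!/bin/sh
# Minimal sketch of a Linux run script for a TIMES case (illustrative paths).
set -e
CASE=casename
TIMES=../timesv484
# Solve the case; filecase=4 handles file-name casing on Linux.
gams "$CASE.run" idir1="$TIMES" ps=0 gdx="$CASE" filecase=4 r="$TIMES/_times.g00"
# Convert the GDX results into the VEDA *.vd* result files.
gdx2veda "$CASE" "$TIMES/times2veda.vdd" "$CASE"

As noted above, drop the r=... option if you have a full GAMS license.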
Thank you Antti for the hints. Evangelos @PSI, with colleagues from UCC, has already done similar things.
We are currently going through the same process in my group, so I would be happy to have a bi/multilateral exchange (this is indeed the VEDA forum here). Also, in case it helps, we are installing WSL on our Windows server so we can prototype what we need before moving to the shared HPC infrastructure.
@Victor:
Thanks, that doc should be quite useful, but note that filecase=4 (a more recent and better GAMS option for TIMES on Linux) is strongly recommended instead of the filecase=2 used in that doc.
16-01-2025, 07:34 PM (This post was last modified: 16-01-2025, 07:35 PM by LucasRM.)
Hi all,
Apologies for not putting this up on the IEA-ETSAP forum where it belongs, but thank you for your responses regardless.
I used the documentation you provided @Victor, along with your notes @Antti-L, and ran my model successfully.
Just as a small note: when running a stochastic model, 'times2veda_stc.vdd' should be used instead of 'times2veda.vdd' in the command for converting the gdx file (page 14 of the documentation).
It is probably rather self-explanatory, but I'll simply mention it here, as it isn't mentioned in the documentation.
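In other words, the conversion step for a stochastic run would look something like this (the file locations are illustrative):

gdx2veda casename ../timesv484/times2veda_stc.vdd casename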
Could we continue this conversation here or on the other forum? I am interested in your workflows.
Two years ago I felt the need to increase my number of simulations (the VEDA parametric feature), which was limited by the resources of my virtual machine. My colleagues helped me run TIMES on a Linux server over SSH.
We also developed our own solution to handle the results. I found going back to VEDA to feed the PostgreSQL database cumbersome, and I was looking for something more automated. A colleague of mine developed a script that creates an SQLite database from the VD files (https://github.com/corralien/vd2db). You can then query the database with standard data science tools (Python, Power BI, Power Query, Tableau, etc.).
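As a quick illustration, once the VD files are in SQLite they can be queried with the sqlite3 command-line shell. The database, table, and column names below are hypothetical (loosely based on the standard VD file columns); check the vd2db README for the actual schema.

sqlite3 results.sqlite <<'SQL'
-- Hypothetical query: total activity by scenario and period for electricity.
SELECT Scenario, Period, SUM(PV) AS total_activity
FROM vd
WHERE Attribute = 'VAR_Act' AND Commodity = 'ELC'
GROUP BY Scenario, Period
ORDER BY Scenario, Period;
SQL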
In addition to this, I had a look at the numerical aspects of my model: crossover was taking a lot of time, so I used the Jacobian analysis in VEDA to improve it. In my case, changing commodity units (e.g. from kt to Mt) reduced crossover time.
Lastly, I switched from CPLEX to Gurobi, which at the time was a bit faster and, I found, better documented.
(16-01-2025, 08:06 PM)VictorG Wrote: @StephTM and @LucasRM, could we continue this conversation here or on the other forum? I am interested in your workflows.
I would like to share my journey in handling results. During my PhD (almost three decades ago), I started writing database queries in FoxPro to process raw model output. Over the next ten years, everything I developed became part of Veda_BE. That approach worked for a while, but as models grew larger and scenario analysis became more complex, I needed a more scalable solution.
I then started processing VD files in SQL Server, with two key steps (a toy sketch of step 1 follows the list):
1. Creating aggregated variables (such as capacity, activity, and flows by technology, sector, or fuel) that are defined in a report definitions file: an Excel file with standard Veda filters for process/commodity/UC/TS, etc.
2. Filtering and structuring views for easier analysis.
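To make step 1 concrete, here is a toy sketch of the kind of aggregation query involved, written against the same hypothetical SQLite schema as above for illustration; the mapping table process_map is made up, and the actual Veda implementation is of course far more elaborate.

sqlite3 results.sqlite <<'SQL'
-- Aggregate raw process-level capacity into user-defined technology groups
-- via a mapping table (all names are illustrative).
SELECT m.TechGroup, v.Period, SUM(v.PV) AS Capacity
FROM vd AS v
JOIN process_map AS m ON m.Process = v.Process
WHERE v.Attribute = 'VAR_Cap'
GROUP BY m.TechGroup, v.Period;
SQL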
This processed data was visualized in VedaViz and LMA, which recently evolved into Veda Online. One major advantage of this approach is that it respects user-defined sets in the model, significantly reducing maintenance overhead.
Now, with Veda 2.0, all this experience has been integrated into the new Reports feature, which offers even more flexibility. While Veda 2.0 is not (yet) suitable for visualizing very large multi-region models with many scenarios, I believe its variable-creation step can serve as a solid starting point for all result-processing efforts, potentially saving a lot of work. And it is not just the standard time-series variables: I have also created efficient syntax to produce data for Sankey charts, for example. A fully granular Sankey, which is quite useless for any real model, can be produced in two lines of this syntax. Aggregated Sankeys take only a few more lines to define.
@StephTM, you are quite familiar with the reports feature - what do you think?
And we are seriously considering bringing browser-based visualization into Veda 2.0.
PS: If everyone reinvents the wheel then who will build cars?