Running TIMES model on Linux HPC cluster
#1
Dear forum members, 

As the title suggests, I have been trying to figure out how, and whether, you can run a TIMES model on a Linux server on which GAMS is already installed. I've moved the GAMS_SrcTIMES.v.4.8.1 and GAMS_WrkTIMES folders onto the server and wanted to test whether running a vtrun file from one of the models within GAMS_WrkTIMES was possible.

Since I am on Windows, my vtrun file is a .cmd file, which naturally doesn't run on Linux.

Does anyone have any experience with the feasibility of the above, and/or a way to convert the .cmd file to an executable .sh file? I have attached a .txt version of the vtrun file, if that's in any way helpful.

I have been looking around on the forum and haven't found a similar question, but if this has already been answered before, let me know.

Thank you, 
Lucas


Attached Files
.txt   vtrun_example.txt (Size: 394 bytes / Downloads: 3)
#2
This is the VEDA Forum and VEDA works only on Windows.

TIMES can be run "manually" (without VEDA) both under Windows and on Linux. I myself have no experience running it on Linux, but one should be able to run the TIMES Demo there easily (just with the command gams rundemo); see: TIMES Demo.

Running TIMES directly with GAMS is very simple from a console window, so if you are a Linux user it should be easy for you to make a shell script for it. In any case, the rundemo.gms file of the TIMES demo shows the command lines both for running TIMES with GAMS and for running gdx2veda to produce the result *.vd* files. One important aspect for Linux usage is adding the option filecase=4 to the GAMS command line.

Example command for running a TIMES case "casename" with GAMS (assuming the DD files are in the current dir):
gams casename.run idir1=../timesv484 ps=0 gdx=casename filecase=4 r=../timesv484/_times.g00

The restart file (r=...) can be omitted if you have a full GAMS license.
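
For illustration, a minimal run.sh along those lines might look like the sketch below (the case name, the directory names, the location of times2veda.vdd and the exact gdx2veda argument order are assumptions on my part; rundemo.gms shows the authoritative command lines):

#!/bin/bash
set -e
# Run the TIMES case with GAMS (DD files assumed to be in the current directory).
# filecase=4 handles the case-sensitive Linux file system; the restart file (r=...)
# can be dropped if you have a full GAMS license.
gams casename.run idir1=../timesv484 ps=0 gdx=casename filecase=4 r=../timesv484/_times.g00
# Convert the GDX results into the VEDA *.vd* result files.
# times2veda.vdd ships with the TIMES source; adjust the path as needed.
gdx2veda casename.gdx times2veda.vdd casename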
#3
Hi all,

Thank you Antti for the hints. Evangelos @PSI, with colleagues from UCC, has already done similar things.
We are currently going through the same process in my group, so I would be happy to have a bi/multilateral exchange (this being the VEDA forum here, after all). Also, in case it helps, we are installing WSL on our Windows server so we can prototype what we need before moving to the shared HPC infrastructure.
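
As a rough sketch, that WSL prototyping setup could look something like the following (the distribution and the Windows-side paths are assumptions, not our actual setup):

# In an elevated PowerShell/cmd prompt on the Windows server:
wsl --install -d Ubuntu
# Inside the Ubuntu shell: copy the TIMES source and work folders from the
# Windows file system (mounted under /mnt/c) into the Linux home directory.
mkdir -p ~/times
cp -r /mnt/c/VEDA/GAMS_SrcTIMES.v.4.8.1 ~/times/
cp -r /mnt/c/VEDA/GAMS_WrkTIMES ~/times/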

Best,
Stéphane
#4
Documentation from an ETSAP project on TIMES and HPC, in which running TIMES on Linux is explained:

https://iea-etsap.org/projects/TIMES-HPC_final.pdf
#5
@Victor:
Thanks, that doc should be quite useful, but note that filecase=4 (a more recent and better GAMS option for TIMES on Linux) is strongly recommended over the filecase=2 used in that doc.
#6
Hi all,

Apologies for not putting this up on the IEA-ETSAP forum where it belongs, but thank you for your responses regardless.

I used the documentation you provided @Victor, along with your notes @Antti-L, and ran my model successfully.

Just as a small note: when running a stochastic model, 'times2veda_stc.vdd' should be used instead of 'times2veda.vdd' in the command for converting the GDX file (page 14 of the documentation).

It is probably rather self-explanatory, but I'll simply mention it here, as it isn't mentioned in the documentation.
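
For anyone following the same steps, a hedged sketch of the difference ("casename" is a placeholder, and the gdx2veda argument order is assumed to follow the form shown in the documentation):

# Deterministic model: standard VDD definition file.
gdx2veda casename.gdx times2veda.vdd casename
# Stochastic model: use the stochastic VDD definition file instead.
gdx2veda casename.gdx times2veda_stc.vdd casename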
#7
@StephTM and @LucasRM,

could we continue this conversation here or on the other forum? I am interested in your workflows.

Two years ago I felt the need to increase my number of simulations (via the VEDA parametric feature), which was limited by the resources of my virtual machine. My colleagues helped me run TIMES on a Linux server over SSH.

We also developed our own solution to handle the results. I found going back to VEDA to feed the PostgreSQL database cumbersome, and I was looking for something more automated. A colleague of mine developed a script that creates an SQLite database from the VD files (https://github.com/corralien/vd2db). You can then query the database with standard data science tools (Python, Power BI, Power Query, Tableau, etc.).
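
For example, once vd2db has produced the SQLite file, a first look from the shell could be something like this (the database file name and the table name are only illustrative; check the actual schema that vd2db creates):

# List the tables that vd2db created (database file name is an assumption).
sqlite3 results.db ".tables"
# Export one (hypothetical) table to CSV for Python / Power BI / Tableau, etc.
sqlite3 -header -csv results.db "SELECT * FROM VAR_Act LIMIT 20;" > var_act_sample.csv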

In addition to this, I had a look at the numerical aspects of my model: crossover was taking a lot of time, so I used the Jacobian analysis in VEDA to improve this. In my case, changing commodity units (e.g. from kt to Mt) reduced crossover time.

Lastly, I switched from CPLEX to Gurobi, which at the time was a bit faster and, I found, better documented.
#8
(16-01-2025, 08:06 PM)VictorG Wrote: @StephTM and @LucasRM, could we continue this conversation here or on the other forum? I am interested in your workflows. [...]

I would like to share my journey in handling results. During my PhD (almost three decades ago), I started writing database queries in FoxPro to process raw model output. Over the next ten years, everything I developed became part of Veda_BE. That approach worked for a while, but as models grew larger and scenario analysis became more complex, I needed a more scalable solution.

I then started processing VD files in SQL Server, with two key steps:

1. Creating aggregated variables - such as capacity, activity, and flows by technology, sector, or fuel - as defined in a report definitions file (an Excel file with standard Veda filters for process/commodity/UC/TS, etc.).
2. Filtering and structuring views for easier analysis.

This processed data was visualized in VedaViz and LMA, which recently evolved into Veda Online. One major advantage of this approach is that it respects user-defined sets in the model, significantly reducing maintenance overhead.

Now, with Veda 2.0, all this experience has been integrated into the new Reports feature, which offers even more flexibility. While Veda 2.0 is not suitable (yet) for visualization of very large multi-region models with many scenarios, I believe its variable creation step can serve as a solid starting point for all result processing efforts - potentially saving a lot of work. It is not just the standard timeseries variables - I have also created efficient syntax to create data for Sankey charts, for example. A fully granular Sankey, which is quite useless for any real model, can be produced in two lines of this syntax. Aggregated Sankeys take only a few more lines to define.

@StephTM, you are quite familiar with the reports feature - what do you think?

And we are seriously considering bringing in browser-based visualization in Veda2.0.

PS: If everyone reinvents the wheel then who will build cars? 