I am getting error 3049 when trying to import results to VEDA-BE. After some searching and reading this post (https://forum.kanors-emr.org/showthread.php?tid=711), I figured the problem was the size of the result file. The result is from a stochastic SPINES run with more than 10 SOW. The VD file created is about 1.5 GB, and a temporary .MDB is 2 GB when the error arises. Clicking OK gets me to another error, Microsoft JET Database Engine 2147467259. Continuing to click OK through all the (almost identical) errors, the .MDB file is gone and nothing is imported to VEDA-BE. A selection of the first three errors is attached.
Since the .MDB file is missing, there is nothing to compact and repair. The LST file shows normal completion with an optimal solution found.
How can I have a look at my results? Are there any tricks or tips to work around the 2 GB Access limit here?
15-05-2020, 12:35 PM (This post was last modified: 15-05-2020, 12:36 PM by AKanudia.)
This was one of the important reasons for the new Veda: the 2 GB limit of Access databases. The only option is to reduce the size of the VD file. First, you can control the reporting level of flow variables. You can also suppress some attributes by editing the VDD file.
Veda2.0 has a fully functional front end now, but the reporting side is still under development.
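While the VDD route is unclear to some (see below), a similar effect can be approximated after the run by filtering the finished VD file before import. Here is a minimal Python sketch, not an official tool; it assumes metadata lines start with '*', that data records are comma-separated quoted fields, and that the attribute name is the first field, so adjust those details to your own VD files:

```python
# filter_vd.py - minimal sketch for shrinking an existing VD file before
# import, by dropping the records of selected attributes.
# Assumptions (adjust to your file): metadata lines start with '*', data
# records are comma-separated quoted fields, attribute name comes first.
import csv

DROP = {"VAR_FIn", "VAR_FOut"}  # example: drop detailed flow records

with open("results.vd", newline="") as src, \
     open("results_small.vd", "w", newline="") as dst:
    for line in src:
        # pass metadata and blank lines through unchanged
        if line.startswith("*") or not line.strip():
            dst.write(line)
            continue
        attr = next(csv.reader([line]))[0]  # first quoted field
        if attr not in DROP:
            dst.write(line)
```

Filtering after the fact only helps with the import, of course; controlling the reporting level in the run itself also saves reporting time.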
Thank you for the clarification, Amit. In this context, I have two questions:
- To Kanors: When do you expect the beta version of the new VEDA-BE to be available to ETSAP partners?
- To all: What are your best tips for reducing the size of the VD file? How have you solved this issue? Where are these options described?
We wish to keep the model's level of detail, but we could for example reduce the number of variables or model periods that are reported in the VD file. I assume we can include some options in the GEN file using $SET.
Please use the New Reply button (not just the reply button) to answer this post.
15-05-2020, 01:57 PM (This post was last modified: 15-05-2020, 01:58 PM by canismajoris.)
In light of your second question, Pernille: I am facing the same problem with large models, and I am waiting for the update. Meanwhile, I am developing some R scripts that help me process large models. They are still under development, but they help a lot.
I am also facing the same problem. After completion of the run, when VEDA tries to create a database to save the results, this error appears. I tried to resolve it by controlling the reporting level of flow variables.
Thank you for providing this solution.
Still, for some runs the error appears.
However, I do not know how to implement the second solution you mentioned: suppressing some attributes by editing the VDD file.
For a SPINES run, I think you could also split the large VD file into smaller files, such that each SOW becomes a separate scenario. You could just use some text file utility for doing that (or even GAMS), such that you write out all the records to different files according to the SOW index, and also append the scenario name with the SOW index in the header line (ImportID- Scenario:). Then you can import all those smaller *.VD files into VEDA-BE, which creates an MDB file for each scenario, and so you will no longer hit the 2 GB limit, because each SOW then becomes a separate scenario in BE.
I just tested that myself, using GAMS for splitting a 1.6 GB VD file according to the SOW index, and it worked well.
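For those who prefer a script over GAMS, a minimal Python sketch of the same splitting idea follows. It is a sketch under assumptions, not the attached routine: it assumes the metadata lines at the top of the VD file start with '*', that the scenario name follows "ImportID- Scenario:" in one of them, and that the SOW index sits in a fixed column of each quoted CSV record (SOW_FIELD below is a placeholder; check the Dimensions header of your own VD file).

```python
# split_vd_by_sow.py - minimal sketch, not the attached GAMS routine.
# Assumptions: '*' metadata lines come first, the scenario name follows
# "ImportID- Scenario:", and the SOW index is in column SOW_FIELD.
import csv
import re

SOURCE = "results.vd"
SOW_FIELD = 7  # placeholder: 0-based column of the SOW index in your file

headers = []   # '*' metadata lines, replayed into every output file
outputs = {}   # SOW index -> open file handle

def open_output(sow):
    """Start a per-SOW VD file, tagging the scenario name with the SOW."""
    f = open(SOURCE[:-3] + "_SOW" + sow + ".vd", "w", newline="")
    for h in headers:
        # append the SOW index to the scenario name in the ImportID line
        # (if the line is not found, scenario names will collide on import)
        h = re.sub(r"(ImportID-\s*Scenario:\s*)([^;\s]+)",
                   lambda m: m.group(1) + m.group(2) + "_SOW" + sow, h)
        f.write(h)
    return f

with open(SOURCE, newline="") as src:
    for line in src:
        if line.startswith("*"):       # collect metadata lines
            headers.append(line)
            continue
        if not line.strip():
            continue
        sow = next(csv.reader([line]))[SOW_FIELD]
        if sow not in outputs:
            outputs[sow] = open_output(sow)
        outputs[sow].write(line)       # record goes to its SOW's file

for f in outputs.values():
    f.close()
```

Each of the resulting files then imports as its own scenario, so every per-scenario MDB stays well below the 2 GB limit.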
18-05-2020, 11:24 AM (This post was last modified: 18-05-2020, 11:49 AM by Pernille.S.)
(18-05-2020, 12:55 AM)Antti-L Wrote: For a SPINES run, I think you could also split the large VD file into smaller files, such that each SOW becomes a separate scenario. You could just use some text file utility for doing that (or even GAMS), such that you write out all the records to different files according to the SOW index, and also append the scenario name with the SOW index in the header line (ImportID- Scenario:). Then you can import all those smaller *.VD files into VEDA-BE, which creates an MDB file for each scenario, and so you will no longer hit the 2 GB limit, because each SOW then becomes a separate scenario in BE.
I just tested that myself, using GAMS for splitting a 1.6 GB VD file according to the SOW index, and it worked well.
Thank you, this seems like a reasonable approach! We will try to do this!
When using SPINES, the capacity is the same for all SOW. Is it possible to change the code such that capacity parameters are reported for only one of the SOW when using SPINES? This would decrease the VD file and make the post-processing easier!
I had a look at all the capacity-related result attributes, and it seems they make up only about 10% of the VD file. Therefore, I wouldn't expect a substantial decrease from reporting them for only one SOW.
In case you are interested, please find attached my test GAMS routine for splitting the VD file.
Thank you for your answers. I managed to split each SOW into a separate VD file (using Python) and read them into VEDA-BE. I can now look at the results. It creates a lot of scenarios and duplicate information, so it is messy, but it is a solution until the new VEDA-BE is ready (when will that be?).