VDSF long-term plan
Here is a list of medium- and long-term features for this project. Some of these features are already scheduled for the first official releases (the target version numbers are given in parentheses). The inclusion of the other features is somewhat speculative - it will depend, in part, on the eventual demands of those who will be using this software.
- Error recovery from program crashes at run-time (0.4).
- Error recovery from computer crashes (0.2 to 0.4).
- Support for as many OSes and hardware platforms as possible (0.2 to 1.0), including embedded devices.
- Interfaces for many programming languages. The expected minimal set of supported languages includes C (0.1), C++ (0.2 and 0.3), Java (0.3), C# (0.4), Python (0.4), Ruby (0.4), Perl (0.4), Tcl (0.4), VB (0.4), Ada (?), COBOL (?), Fortran (?), Lisp (?).
- Optional support for syncing the shared memory to disk on "demand" (possibly after every VDSF transaction). This only makes sense when using solid-state disks (most likely the more expensive RAM-based SSDs - Flash SSDs are probably not well suited to applications doing thousands and thousands of transactions per second).
- Support for three basic (internal) types of data containers: linked lists, hash maps and T-trees. Of course, the API itself will offer many more types of data containers, derived from these basic types (T-trees will be included in 0.4).
- Data in the VDSF tree should be easy to browse. Plugins could be written for the file browsers of the different OSes, although it might be easier to write plugins for web browsers instead. In other words, something like this will be done, but the details are not yet known.
- Generic module(s) to access the data from a web server, most probably as read-only data.
- Generic client-server software: a server that initializes itself from a VDSF hash map.
- One or more generic monitoring programs (the data to monitor, the frequency of the monitoring, the method by which findings are reported - all of this can be put in a hash map so that it can be modified at run-time).
- Either a hash-map editor or one or more plugins to edit hash maps using existing editors.
- A higher-level interface (API) able to define and access data definitions for the data containers. One possibility: this could be done (in part) with a code generator - write the data definition and the generator writes the code to access the data (including all the validations that might be needed).
- A plugin for Eclipse, probably associated with this higher-level API.
- Event-processing engine (for events generated by the data in the VDS). This could start with something simple - monitoring that queues don't grow too large and generating events if they do. It could then be generalized to other cases (inventory of a product getting low, abnormal levels of activity on a stock for a stock-market server, etc.).
Last updated on March 21, 2008.