A bit of history on the project
About two years ago, the project was almost ready for its first alpha release. There was just one major complication: a bug somewhere that I simply could not find after days of searching. (Debugging the code is a bit complex since pointers are not stored in the shared memory; the VDS uses offsets, and pointers are reconstructed on the fly.)
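To illustrate the offset technique mentioned above, here is a minimal sketch. The type and function names are illustrative, not the actual VDSF identifiers; the idea is only that a shared-memory region may be mapped at a different address in each process, so objects store offsets from the base of the region rather than raw pointers:

```c
#include <stddef.h>

/* Offsets from the base of the shared-memory region replace pointers. */
typedef size_t vds_offset_t;

/* Reconstruct a usable pointer from an offset, on the fly. */
static inline void *vds_ptr(void *base, vds_offset_t off)
{
    return (char *)base + off;
}

/* The reverse: compute the offset to store from a pointer. */
static inline vds_offset_t vds_off(void *base, void *ptr)
{
    return (size_t)((char *)ptr - (char *)base);
}
```

Since each process applies its own base address, the stored offsets stay valid no matter where the region is mapped, which is exactly why the raw pointers cannot be stored directly.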
Obviously, I would have found it eventually had I kept going, but something was definitely fishy with the old code (I usually find bugs quite quickly). Because of this, and because VDSF MUST be as bug-free as possible (and then some), I decided to do a complete rewrite with these four rules:
- Use design by contract to ensure the correctness of the arguments passed to each function, and more (done with macros so that these checks can be removed in a production system).
- Rewrite the code in C (it was originally in C++). This avoids future problems, since many features of C++ cannot be used with shared memory (virtual functions, for example). It also eliminates one specific issue: the use of offsetof() is not permitted in C++ (although it is often tolerated, at least by g++). Of course, C would also make it easier to eventually turn the product into a kernel module, if there is a demand.
- Add tests, tests, and more tests (more than 550 currently, as of Subversion revision 203).
- Add some sort of signature to many of the data structures, to verify not only that a pointer is not NULL but also that it points to the right type of object (pointers are often reconstructed on the fly from offsets, and for these pointers the compiler's normal type checks are useless).
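The two runtime-checking rules above could look something like this minimal sketch. The macro, constant, and structure names are hypothetical, chosen only to show the pattern: a precondition macro that compiles away in production builds, and a per-type signature field that a reconstructed pointer can be checked against:

```c
#include <assert.h>
#include <stddef.h>

/* Design-by-contract check: active during development, compiled out
 * entirely when VDS_PRODUCTION is defined for a production build. */
#ifdef VDS_PRODUCTION
#  define VDS_PRE_CONDITION(cond) ((void)0)
#else
#  define VDS_PRE_CONDITION(cond) assert(cond)
#endif

/* An arbitrary per-type magic number acting as the signature. */
#define QUEUE_SIGNATURE 0x1e4f27c3u

typedef struct vds_queue {
    unsigned int signature; /* must equal QUEUE_SIGNATURE */
    size_t       length;
} vds_queue;

void queue_push(vds_queue *q)
{
    /* NULL check alone is not enough for a pointer rebuilt from an
     * offset; the signature confirms it really is a vds_queue. */
    VDS_PRE_CONDITION(q != NULL);
    VDS_PRE_CONDITION(q->signature == QUEUE_SIGNATURE);
    q->length++;
}
```

A pointer reconstructed from a corrupted offset would almost certainly land on memory that does not contain the expected signature, so the check catches the error at the call site instead of letting it propagate.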
As the rewrite progressed, it became evident that a complete rewrite of the memory allocator was also needed to make sure the project would be rebuilt on a more solid foundation. The previous allocator was built on BGET, a very good allocator. But... it allocates memory as needed, in small chunks. Navigating through all these small chunks to track down problems (a data structure partially overwritten, for example) is a bit of a nightmare.
A better solution was to allocate memory in large chunks (like the memory pages of an operating system) for all the data containers and other large objects in shared memory. Allocation of small chunks (for example, adding items to a hash map) is done from within the large chunks (unless the large chunk is full, in which case you go back to the main memory allocator). This makes debugging a lot easier, since most of the internal information of an object is located together. It is, however, more complex.
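The two-level scheme can be sketched roughly as follows. This is an assumed illustration, not the actual VDSF allocator: the main allocator hands out fixed-size pages, and each object then carves its small allocations out of its own page with a simple bump allocator, falling back to the main allocator when the page is full:

```c
#include <stddef.h>

#define VDS_PAGE_SIZE 4096 /* illustrative page size */

/* One large chunk obtained from the main allocator, owned by an object. */
typedef struct vds_page_group {
    char   *page; /* the large chunk itself */
    size_t  used; /* bump pointer within the page */
} vds_page_group;

/* Small allocation served from within the object's own page.  Returns
 * NULL when the page is full - the cue to go back to the main memory
 * allocator for a fresh large chunk. */
void *vds_small_alloc(vds_page_group *group, size_t size)
{
    if (group->used + size > VDS_PAGE_SIZE)
        return NULL;
    void *p = group->page + group->used;
    group->used += size;
    return p;
}
```

Because every small allocation for an object comes from that object's own pages, a debugger walking one page sees all of the object's internal data side by side, which is the debuggability win described above.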
Current status: the project is now back to where it was 2 years ago (better, I hope). The first alpha release occurred at the end of November 2007.
Last updated on May 22, 2008.