GitHub repo: rptools/assets-management

Thoughts, Help, Feature Requests, Bug Reports, Developing code for...

Moderators: dorpond, trevor, Azhrei

Forum rules
PLEASE don't post images of your entire desktop, attach entire campaign files when only a single file is needed, or otherwise act in some other anti-social manner. :)
badsequel
Giant
Posts: 115
Joined: Thu May 31, 2012 3:13 am

GitHub repo: rptools/assets-management

Post by badsequel »

(This thread was split from viewtopic.php?f=3&t=23297 which discussed the new GitHub repo for RPTools. Craig was describing the repo and then moved into the new parser, now called "scripting-engine", and from there username offered to work on asset management. That other thread got too big so I've split it off here. The database concept was part of the asset management discussion (in a general way) so those posts are included here as well, although they are currently beyond the scope of username's work. If you want to discuss something in this thread which is a tangent, please create a new thread or -- at a minimum -- flag your new post as belonging in a separate thread and a moderator can split it.)

Craig wrote: Data storage
...Getting people to store large JSON-like objects in tokens or characters works against the desire to only send changes between clients, as one small change sends everything again. So there will need to be an API to some form of data storage; a database seems like a good option for this as well...

That got me thinking of a NoSQL DB like MongoDB...
Last edited by Azhrei on Thu Apr 04, 2013 6:13 pm, edited 1 time in total.
Reason: split from repo thread

JML
Dragon
Posts: 515
Joined: Mon May 31, 2010 7:03 am
Location: Blagnac, France

Re: RPTools "next" Git Repository

Post by JML »

You made me take a quick look at existing NoSQL DBs. Although I don't have a clue about the pros and cons, I found these: db4o, ObjectDB, NeoDatis.

username
Dragon
Posts: 272
Joined: Sun Sep 04, 2011 7:01 am

Re: RPTools "next" Git Repository

Post by username »

Craig wrote:
username wrote:I wouldn't put this on OSGI. This is a problem for any plugin concept with persistent data. "Working" would mean that everything unknown is ignored.

Unfortunately, ignoring everything that is unknown would not necessarily work. If someone implements the interface used for tokens etc. to add some new effect, and that plugin has bugs and you decide to remove it for your campaign, then you lose all your tokens.

I was thinking of something like XML: ignore all tags you don't know about. So if a plugin is removed, the corresponding tags are ignored. This way you don't lose anything besides the plugin. (Assuming we can enforce that plugins don't mess around in foreign tags and such.)
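As a rough illustration of the "ignore unknown tags" idea (a sketch only; the tag names and the KNOWN set are made up, and a real format would presumably track plugin namespaces rather than a hard-coded list):

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical lenient reader: known tags are handled, anything
// unrecognized (e.g. data from a removed plugin) is skipped wholesale.
public class LenientReader {
    static final Set<String> KNOWN = Set.of("token", "name"); // illustrative

    public static List<String> readKnown(String xml) {
        List<String> seen = new ArrayList<>();
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT) {
                    String tag = r.getLocalName();
                    if (KNOWN.contains(tag)) {
                        seen.add(tag);  // handled by core or a loaded plugin
                    } else {
                        skipSubtree(r); // plugin gone? its data is ignored
                    }
                }
            }
        } catch (XMLStreamException e) {
            throw new IllegalStateException("bad XML", e);
        }
        return seen;
    }

    // Consume events until the matching END_ELEMENT closes the unknown tag.
    static void skipSubtree(XMLStreamReader r) throws XMLStreamException {
        int depth = 1;
        while (depth > 0) {
            int e = r.next();
            if (e == XMLStreamConstants.START_ELEMENT) depth++;
            else if (e == XMLStreamConstants.END_ELEMENT) depth--;
        }
    }
}
```

The open question Azhrei raises below still applies: this only works if the reader can round-trip the skipped subtree back out on save, rather than dropping it.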
Craig wrote:...
Asset Manager
The asset manager needs to be updated so that it can handle assets other than just pictures; things that come to mind are sounds, movies, text, PDFs. Not saying that all of these things will be supported right off the bat, but it makes sense to plan ahead. Also, instead of the asset manager being responsible for knowing how to fetch assets from everywhere, this should all be delegated to handler classes which implement the correct interface. The asset manager should just manage the handlers and ask them to retrieve the info, cache what needs to be cached, etc. There will obviously need to be some way for these handlers to specify priority (or speed, or whatever you want to call it).
Handlers could be read only or read/write.

So, the list of handlers that would need to be defined:
  • cache (read/write)
  • campaign (read/write)
  • Local repository (read/write)
  • Remote HTTP repository (read) (possibly write as scp/sftp/ftp)
  • rptools.net repository.

Not saying that I expect people to be writing plugins ala OSGi for this, but I think this is the best way to keep it extensible.

The asset manager and the handlers are probably big enough to break out from everything else; it also makes it easier for any other projects to reuse it if they want. Also, breaking it out into its own repository should encourage people to think of it as a separate piece and only deal with it via an API and not rely on its internals (hey, I can be optimistic, can't I?).
...
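To make the handler idea above concrete, here is a minimal sketch of the delegation Craig describes; all names (AssetHandler, resolve, etc.) are placeholders I made up, not the actual API:

```java
import java.util.Comparator;
import java.util.List;

// Sketch: the manager only talks to handlers, consulted in priority order.
public class AssetSketch {
    interface AssetHandler {
        boolean has(String id);
        byte[] fetch(String id);  // real code would stream, not return bytes
        int priority();           // lower = cheaper source, consulted first
        boolean writable();       // read-only vs read/write handler
    }

    // First handler (by priority) that has the asset wins.
    static byte[] resolve(List<AssetHandler> handlers, String id) {
        return handlers.stream()
                .sorted(Comparator.comparingInt(AssetHandler::priority))
                .filter(h -> h.has(id))
                .findFirst()
                .map(h -> h.fetch(id))
                .orElse(null); // "not found" vs "not loadable" split elsewhere
    }
}
```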


Assuming I'd try to tackle this thing (probably because the way I'd currently like to use it, it's broken), I have a few questions.
* An asset for me is an object that is opaque to MT, i.e. the system knows what to do with it; all we basically know is its MIME type and where to stuff it.
* What is the MD5 thing good for; is it just a glorified way of creating an id? Currently it has caused me more pain than anything else.
* Any particular mandatory API? I guess any outcome will not fit into 1.3 anymore (or at least not without additional pain).
* Where do you want this to go? Is my Git repo OK?

No claim staked or work accepted. It is a fallacy to assume I have too much time on my hands. It only seems that way this week. :wink:

Azhrei
Site Admin
Posts: 12050
Joined: Mon Jun 12, 2006 1:20 pm
Location: Tampa, FL

Re: RPTools "next" Git Repository

Post by Azhrei »

username wrote:I was thinking of something like XML: ignore all tags you don't know about. So if a plugin is removed, the corresponding tags are ignored.

Just an aside... How do we know if an XML tag is unavailable? If the tags are implemented in classes that aren't loaded yet...?

username wrote:Assuming I'd try to tackle this thing (probably because the way I'd currently like to use it, it's broken), I have a few questions.
* An asset for me is an object that is opaque to MT, i.e. the system knows what to do with it; all we basically know is its MIME type and where to stuff it.

Correct. MT knows if the asset is resident locally or needs to be loaded from a repository (the index.gz from the repository is cached).

* What is the MD5 thing good for; is it just a glorified way of creating an id? Currently it has caused me more pain than anything else.

Yep, just an id. If you can think of a replacement that is fast to search on, easy to generate, and will be globally unique for a given image, then speak up and we can hash it out. :)
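For reference, computing an MD5-based id like this is a few lines over java.security.MessageDigest; a minimal sketch (the class name Md5Key is illustrative, the real class lives in rplib):

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// The point of hashing content: the same bytes always produce the same
// 128-bit key, so two repositories holding the same image agree on the id.
public class Md5Key {
    public static String of(byte[] assetBytes) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(assetBytes);
            // %032x pads to the full 32 hex characters, preserving leading zeros.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }
}
```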

* Any particular mandatory API? I guess any outcome will not fit into 1.3 anymore (or at least not without additional pain).

Hm. I guess that will depend on how the code develops. I'd like to see an initial prototype of the code so that the API can be hashed out in terms of what will eventually be needed.

If we were to support multiple image formats, perhaps an animation format, perhaps vector-based formats (PostScript/PDF, SVG), and so on, what would the API look like? Ideally we would start with that and trim it back when implementing the package to what we need now.

1. Load an asset (requires the manager to look up its location).
2. Save an asset (for caching purposes or for saving a map/campaign/token).
3. Return an animation callback object (think ImageObserver or similar).
4. Convert asset into BufferedImage object.
5. Get VBL mask (could be vector- or raster-based).
6. Get MBL mask??

Those come to mind right away. Does the caller need to know about transparency in the image, or number of bits per pixel, or other metadata? If so we'll need a separate call that returns an AssetMetaData object.

Loading an asset needs to take into account the obvious sources like file:// and http:// but also drag-n-drop sources and pasting from the clipboard. The current MT implementation adds a URLStreamHandler for asset:// and that code will need an interface for loading the asset but I'm not sure what type of interface it requires. (I'm thinking just #1 and #4.)
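Transcribing the six calls above into a Java interface might look roughly like this; a sketch only, where the names, signatures, and the Object placeholders for the callback and mask types are all assumptions, not a committed API:

```java
import java.awt.image.BufferedImage;
import java.io.IOException;

// Hypothetical transcription of the six calls listed above.
interface AssetApi {
    byte[] load(String assetId) throws IOException;            // 1. load (manager locates it)
    void save(String assetId, byte[] data) throws IOException; // 2. save (cache/map/campaign/token)
    Object animationCallback(String assetId);                  // 3. ImageObserver-like callback
    BufferedImage asImage(String assetId) throws IOException;  // 4. convert to BufferedImage
    Object vblMask(String assetId);                            // 5. VBL mask (vector or raster)
    Object mblMask(String assetId);                            // 6. MBL mask
}
```

Per the note above, the asset:// URLStreamHandler would probably only need 1 and 4, with an AssetMetaData accessor added later if callers need transparency or bit-depth information.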

JamzTheMan
Great Wyrm
Posts: 1872
Joined: Mon May 10, 2010 12:59 pm
Location: Chicagoland

Re: RPTools "next" Git Repository

Post by JamzTheMan »

Don't forget about sound assets :)

One thing that may be nice would be a no-cache asset tag of some sort. Especially for audio streams that would just need to stream and not cache (at least not permanently).

And if we could have "http" resources (like rolld20), I suppose it would cache but need to look for updates. It would be nice to be able to host something like a token database without needing to download the whole thing, and not have to re-download it all again to pick up random changes.

I share my collection of assets regularly with my players who GM but we get out of sync fast.
-Jamz
____________________
Custom MapTool 1.4.x.x Fork: maptool.nerps.net
Custom TokenTool 2.0 Fork: tokentool.nerps.net
More information here: MapTool Nerps! Fork

username
Dragon
Posts: 272
Joined: Sun Sep 04, 2011 7:01 am

Re: RPTools "next" Git Repository

Post by username »

Before putting this up in some internet repo, I'd like to throw this out at people in order to see whether I am going in the right direction or have misunderstood something.
Attached is a small zip containing Java source. Shoot at will.
Attachments
AssetManDraft.zip
(4.32 KiB) Downloaded 338 times

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: RPTools "next" Git Repository

Post by Craig »

username wrote:Before putting this up in some internet repo, I'd like to throw this out at people in order to see whether I am going in the right direction or have misunderstood something.
Attached is a small zip containing Java source. Shoot at will.


I had a quick look at the code and it looks good, certainly in the direction I was thinking.
Things I would like to see added (I know it's only a prototype, so there is a good chance you were planning to add most of this stuff, but at least if I list it we can discuss it):

  • A callback for progress; not important for local files, but it can be important for remote files (MapTool actually has a progress window for asset downloads in 1.3; I am not convinced it works well, but that's another story :) ).
  • Likewise, for some long-running asset fetches there should be a way to kill the fetch. Sometimes on a slow network (for example the whole of China, when my friend plays from there) MT currently gets stuck trying to download a trillion (OK, slight exaggeration) files at 1 byte per decade (this may also be an exaggeration). Sometimes it would be acceptable for him to just cancel a fetch and live with the broken image for the whole of the 5 minutes it will be used in the game.
  • A method in the AssetSupplier interface that returns whether assets fetched from this supplier should be cached (for example, images downloaded from the network should be cached, while images from a local campaign file or local directory need no caching).
  • The asset manager should automatically cache any asset coming from an AssetSupplier that states it should be cached. To this end you may want to change getInstance() to getInstance(WriteableAssetSupplier cache) and just use a Map to enforce a single instance per cache. Chances are there will almost always be only one, but you never know. In this case it's probably a good idea to have two interfaces, AssetSupplier and WriteableAssetSupplier, so the intention is clear.
  • There is an updated MD5Key in the rplib git repository; you should use that for the id of the asset. The reason md5 is used (well, it could be any decent hashing function) is that if I have a JPEG that's in my repository folder, my network repository, and the rptools public repository, they need to have the same ID, and using md5 as an ID guarantees this. Once assets are saved to a campaign file or cache they may be converted to PNG, but they should keep the ID of the original JPEG (or you will just keep fetching it over and over again). It's fine if the md5 is not the whole key (e.g. you may prepend "image-" or "sound-" or "whatchyamacallit-" to the value returned by the md5) as long as the key doesn't change when you move it to cache (e.g. I wouldn't prepend it with "jpeg-" or "png-"). If the ID is based on a file path or some semi-random GUID, then when Joe/Joan Blow puts up his/her uber MT repository site none of the IDs they generate will match the IDs in the GM's campaign file, so it will never be used.
  • There should also be a setPriority() as well as getPriority() to allow users to move them up and down. To support this there should also be a way to query which AssetSuppliers are available (this is also needed so that AssetSupplier locations can be passed along in the campaign). This also implies that there should be a factory for creating the standard AssetSuppliers, as well as a way to get a human-readable label (e.g. URL(http://JoeBlow.com/assetmania), Local Cache, Local Asset Library, etc.). Not all AssetSuppliers would be in the asset project (MT's Campaign AssetSupplier, for example) but many would.
  • A nice "convenience" method that would take a WriteableAssetSupplier and a collection of asset ids and go off in its own thread, running through that list of assets, fetching them, and writing them to the WriteableAssetSupplier. This way the AssetManager can be used to easily create transportable asset "packs" that can be put on web servers, Dropbox, Google Drive, SugarSync, a USB stick, or whatever else. The progress callback should be used to update the percentage of files done or whatever.
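The supplier split and the "pack" convenience method from the list above could be sketched roughly like this (hypothetical names and signatures, and synchronous for brevity where the real thing would run in its own thread):

```java
import java.util.List;
import java.util.function.IntConsumer;

// Sketch: a read-only supplier advises whether its assets need disk caching;
// a writable one can also receive assets, e.g. when exporting a "pack".
public class PackSketch {
    interface AssetSupplier {
        boolean has(String id);
        byte[] get(String id);
        boolean cacheable();  // network source: true; local campaign/dir: false
    }

    interface WritableAssetSupplier extends AssetSupplier {
        void put(String id, byte[] data);
    }

    // Copy a list of assets into a pack, reporting percentage progress.
    static void export(AssetSupplier from, WritableAssetSupplier pack,
                       List<String> ids, IntConsumer progress) {
        int done = 0;
        for (String id : ids) {
            if (from.has(id)) pack.put(id, from.get(id));
            progress.accept(100 * ++done / ids.size());
        }
    }
}
```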

Now for the "coding standard" stuff, so hopefully we are all doing something near-consistent :)
  • Where you have comments that say the method will fail if the argument is null, I prefer that you actually do the check for null and throw a NullPointerException,
    e.g.

    Code: Select all

    if (newSupplier == null) {
       throw new NullPointerException("place description of problem here");
    }

    That at least puts the error right in front of anyone using your classes, rather than a non-specific NullPointerException later on in the code (or worse, in yet another called method). Sure, it means a little more coding, but a lot less hassle when someone makes a mistake.
    The fact that it throws a null pointer exception should also be documented with a @throws (but not added to the list of exceptions thrown on the method declaration).

    I think the code is going to grow large enough that people are not going to be able to know every nook and cranny so these sorts of defensive practices pay off.

    I also prefer throwing exceptions for the publicly available API (public, protected) rather than asserts, so the check can't be turned off. For private or package methods assert is fine. (We are not doing anything so performance-critical here that we need to eliminate that code.)
  • On a similar topic: instead of returning and doing nothing in methods where the argument is invalid (such as you have in getAssetAsync() where listener is null), throw an exception (one of the standard ones if possible, in this case NullPointerException) so the error is caught when coding/testing rather than causing a head-scratching "why isn't my asset loading?".
  • I am a strong believer in making things final unless they are designed to be extended, and limiting things to package access (default access) where they are not needed outside the package (e.g. the AssetSuppliers don't need to be public classes, as the factory will create them).
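The convention described above (loud checks on the public API, asserts only in private code) might look like this; the Suppliers class and its methods are made-up illustrations, not project code:

```java
import java.util.ArrayList;
import java.util.List;

// Public API validates loudly; private helpers may use assert,
// which callers can enable/disable with -ea/-da at run time.
public class Suppliers {
    private final List<String> suppliers = new ArrayList<>();

    public void add(String newSupplier) {
        if (newSupplier == null) {
            // Fails at the call site, not somewhere deep inside later code.
            throw new NullPointerException("newSupplier must not be null");
        }
        insert(newSupplier);
    }

    private void insert(String s) {
        assert s != null : "already checked by public callers"; // private: assert is fine
        suppliers.add(s);
    }

    public int size() { return suppliers.size(); }
}
```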

Ok so some other things that go into the "we need to think about" bucket.

At the moment the AssetManager fetches the asset, converts it to an image, and returns null if it can't do this. Really there are two separate conditions: the asset can't be found, and the asset can't be loaded. These can be represented differently on the map so that users know which problem they are looking for.

I have been toying around with the idea of having a separate metadata file carried around with the asset (just a small file with the credits, the license type (not the whole license, just the name and URL), and the web page of the creator). So when these things are loaded it could return an Asset<T> instead of the object itself, which carries this metadata, or there could even just be a getAssetMetadata(...) method in AssetManager. Basically, when the asset is fetched from the rptools website or some other third-party repository someone has set up, it grabs the asset file and, if it exists, the metadata file, and caches them (if required); that way it's possible for people to look up the credits and licenses for any images fetched this way. I know many of these file formats contain places to put metadata, but I don't think putting all of this information in the files themselves would be a clean solution. This is certainly not a must-have, but I would list it as a very nice-to-have.



I will create an assets repository in the rptools organization with a pom.xml to start with, some time in the next couple of hours. Then you can just fork it and do your thang.

username
Dragon
Posts: 272
Joined: Sun Sep 04, 2011 7:01 am

Re: RPTools "next" Git Repository

Post by username »

Craig wrote: A method in the AssetSupplier interface that returns whether assets fetched from this supplier should be cached (for example, images downloaded from the network should be cached, while images from a local campaign file or local directory need no caching).

My intention was to implicitly cache via priorities. E.g. higher priorities decide whether to override/update, but the manager decides what to cache where. (Decision to be externalized in an algorithm class.) What did you have in mind for priorities?
Craig wrote: There is an updated MD5Key in the rplib git repository; you should use that for the id of the asset. The reason md5 is used (well, it could be any decent hashing function) is that if I have a JPEG that's in my repository folder, my network repository, and the rptools public repository, they need to have the same ID, and using md5 as an ID guarantees this. [...] If the ID is based on a file path or some semi-random GUID, then when Joe/Joan Blow puts up his/her uber MT repository site none of the IDs they generate will match the IDs in the GM's campaign file, so it will never be used.

I strongly disagree on mixing ids and hashes, mostly because it constrains external sources. External sources may be able to generate unique ids, but not the exact same hashes. This is currently broken in 1.3: I generate tokens from my own repository with an external script, which MT is seemingly able to import nicely. Next time I start the campaign it freezes MT; the last thing I see is that the id doesn't match the hash. (And it takes some MT-fu to recover the campaign, too.)

I propose to have the uniqueness check be part of the handlers. I.e., when a new asset is added, the handler checks via md5 (or another mechanism) whether it is already present, and acts accordingly.
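One possible shape for that handler-side uniqueness check (the class name and the id-to-hash index are my assumptions about what "act accordingly" could mean, not an agreed design):

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Sketch: the handler keeps a content-hash index and deduplicates on
// insert, so the public id can stay independent of the hash.
public class DedupeStore {
    private final Map<String, byte[]> byId = new HashMap<>();
    private final Map<String, String> idByHash = new HashMap<>();

    /** Returns the id actually stored under: the new one, or an existing duplicate's. */
    public String add(String id, byte[] data) {
        String hash = md5(data);
        String existing = idByHash.get(hash);
        if (existing != null) return existing; // same bytes already present
        byId.put(id, data);
        idByHash.put(hash, id);
        return id;
    }

    private static String md5(byte[] data) {
        try {
            return String.format("%032x", new BigInteger(1,
                    MessageDigest.getInstance("MD5").digest(data)));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```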
Craig wrote: Now for the "coding standard" stuff, so hopefully we are all doing something near-consistent :)

You could provide a style package for Eclipse. Or run some checker in Sonar, should we create a continuous build. If you feel strongly about these rules, this should be automated. At least, I work in different projects with different philosophies in that respect and it is easy to get confused. Personally, I'm of the "don't complain, do something about it" school of development.

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: RPTools "next" Git Repository

Post by Craig »

username wrote:My intention was to implicitly cache via priorities. E.g. higher priorities decide whether to override/update, but the manager decides what to cache where. (Decision to be externalized in an algorithm class.) What did you have in mind for priorities?


What I was thinking for the priorities was that they represent how expensive it is to get the asset from the location. E.g. you will grab an asset from the cache, local asset library, or campaign file before you try to grab it from a network repository, which you would try before the MT server. Caching is separate from this, as some local sources won't need to be cached, or only the hashes cached along with the location of the real file (kinda like what happens now with stuff in your asset library, but I am by no means saying it should work exactly that way).

username wrote:I strongly disagree on mixing ids and hashes, mostly because it constrains external sources. External sources may be able to generate unique ids, but not the exact same hashes. This is currently broken in 1.3: I generate tokens from my own repository with an external script, which MT is seemingly able to import nicely. Next time I start the campaign it freezes MT; the last thing I see is that the id doesn't match the hash. (And it takes some MT-fu to recover the campaign, too.)


There is no requirement to implement current bugs btw :)

OK, so here is the issue: if I play with more than one group they will often have the same images, be they from the large 4 GB repo that was floating around, the RPTools web albums, or many other places. Ideally I don't want to download these from a repo (or worse still, the MT server) if they are already on my disk; I also don't want to end up with 4 or 5 versions of many of them in my cache on top of the ones I already have on disk. Using a well-known hashing algorithm like md5 for an id for the most part solves this problem (if someone converts all the JPEGs to PNGs there is really nothing you can do).


username wrote:I propose to have the uniqueness check be part of the handlers. I.e., when a new asset is added, the handler checks via md5 (or another mechanism) whether it is already present, and acts accordingly.

This could work, but the hash would have to be either part of the index for the list of files, or part of the small metadata file that lives with the file (i.e. there is no point retrieving the file to calculate its md5 just to check whether you already have it). The one question I do have, though, is: what does the ID represent? Because it still seems that you are using the md5 hash as the true ID of the object. Or did you have some other thoughts on how you were going to achieve this?

username wrote:
Now for the "coding standard" stuff, so hopefully we are all doing something near-consistent :)

You could provide a style package for Eclipse.

Given that we are moving away from requiring people to use Eclipse, that's not a great idea (I also don't use Eclipse, so someone else would have to provide it).

username wrote:Or run some checker in Sonar, should we create a continuous build.

It would be nice; in all honesty I would really like to have a continuous build system. But then I look at the requirements for even a small-to-mid-size project, speak to a few people I know who are using them to make sure said requirements are not just completely overstated, then do a search for hosted VM environments with > 512 MB memory, and at that point I find the cost kind of hard to justify. The next option is to run it on some machine I find lying around at home, but that's not necessarily going to be reliable; more like a sporadic build system :)

username wrote:If you feel strongly about these rules, this should be automated. At least, I work in different projects with different philosophies in that respect and it is easy to get confused. Personally, I'm of the "don't complain, do something about it" school of development.

Just letting you know the details, not complaining, and by doing that I "have done something about it" :) Surely you were not expecting to be able to contribute to an open source project without following some standards? Now, I am more than willing to entertain automated solutions that will help with enforcing/flagging the rules, but if they require everyone using Eclipse or throwing a chunk of money at it, then it's hard to get from entertaining to doing.

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: RPTools "next" Git Repository

Post by Craig »

Created the repo
Assets Management
https://github.com/RPTools/asset-management


I decided to go with assets management instead of assets as future me was so distraught about having to answer so many questions about why assets don't go in that repo many years from now that he actually created a time machine to come back and tell me off for being so stupid.

username
Dragon
Posts: 272
Joined: Sun Sep 04, 2011 7:01 am

Re: RPTools "next" Git Repository

Post by username »

Craig wrote:What I was thinking for the priorities was that they represent how expensive it is to get the asset from the location. E.g. you will grab an asset from the cache, local asset library, or campaign file before you try to grab it from a network repository, which you would try before the MT server. Caching is separate from this, as some local sources won't need to be cached, or only the hashes cached along with the location of the real file (kinda like what happens now with stuff in your asset library, but I am by no means saying it should work exactly that way).

I was thinking Prio 1 = mem cache, 2 = disk cache, 3 = ...
Craig wrote:OK, so here is the issue: if I play with more than one group they will often have the same images, be they from the large 4 GB repo that was floating around, the RPTools web albums, or many other places. Ideally I don't want to download these from a repo (or worse still, the MT server) if they are already on my disk; I also don't want to end up with 4 or 5 versions of many of them in my cache on top of the ones I already have on disk. Using a well-known hashing algorithm like md5 for an id for the most part solves this problem (if someone converts all the JPEGs to PNGs there is really nothing you can do).
[...]
This could work, but the hash would have to be either part of the index for the list of files, or part of the small metadata file that lives with the file (i.e. there is no point retrieving the file to calculate its md5 just to check whether you already have it). The one question I do have, though, is: what does the ID represent? Because it still seems that you are using the md5 hash as the true ID of the object. Or did you have some other thoughts on how you were going to achieve this?

Use md5 for checking equality but something else for referencing. This also allows us to update assets (implicitly). The id could be anything; e.g. the origin URL is OK. Update problems lie with the individual handlers.
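A tiny sketch of that id/hash separation (AssetRef and its method names are illustrative, not a proposed API):

```java
// The reference id (e.g. the origin URL) names the asset; the md5 only
// detects duplicate content, or an implicit update (same id, new bytes).
public record AssetRef(String id, String md5) {
    public boolean sameContent(AssetRef other) {
        return md5.equals(other.md5);   // equality check: hash
    }

    public boolean sameAsset(AssetRef other) {
        return id.equals(other.id);     // referencing: stable id
    }

    public boolean isUpdateOf(AssetRef other) {
        return sameAsset(other) && !sameContent(other);
    }
}
```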
Craig wrote:It would be nice; in all honesty I would really like to have a continuous build system. But then I look at the requirements for even a small-to-mid-size project, speak to a few people I know who are using them to make sure said requirements are not just completely overstated, then do a search for hosted VM environments with > 512 MB memory, and at that point I find the cost kind of hard to justify. The next option is to run it on some machine I find lying around at home, but that's not necessarily going to be reliable; more like a sporadic build system :)

I haven't checked all the requirements, but I can build the Linux piece in less than 5 minutes on my 5-year-old machine. CloudBees offers 2000 min/month for free; that's more than an hour a day. (I don't see that they count traffic as well.) Is there anything in their conditions you mistrust? (Maybe all the "be our PR agent" type statements?)
Craig wrote:Surely you were not expecting to be able to contribute to an open source project without following some standards? Now, I am more than willing to entertain automated solutions that will help with enforcing/flagging the rules, but if they require everyone using Eclipse or throwing a chunk of money at it, then it's hard to get from entertaining to doing.

I have used Sonar in Jenkins, and its plugins, before. They do not require Eclipse, but they do require someone to set them up; manpower shortage again, I guess, unless it all works out of the box. To be honest, unless someone is interested in this kind of work, skip it. The bottom line for me is that I am prone to forgetting style guidelines when they get in the way of my interpretation of readable code. (Unintentionally, really! Just like in natural languages, you're prone to usin' da dialect ur born wid.) That's where a spell checker comes in handy.

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: RPTools "next" Git Repository

Post by Craig »

username wrote:I was thinking Prio 1 = mem cache, 2 = disk cache, 3 = ...

I guess as long as there is a way for users to order the asset suppliers they add for their own repositories, then I am OK with priority 1 being the mem cache and 2 the disk cache. I still think the supplier should advise whether assets coming from it should be cached (to disk, that is) or not, as the asset manager is not going to know about all the types of sources it is supplied, and some may be just as fast as the disk cache.

username wrote:Use md5 for checking equality but something else for referencing. This also allows us to update assets (implicitly). The id could be anything; e.g. the origin URL is OK. Update problems lie with the individual handlers.

So are you planning to have the md5 as part of the repository index, or in a small metadata file, so that we don't have to download the file to see if we already have it? If going with an id instead of md5, then I think I prefer something like the GUID (I can add that class to rplib) currently used for ids of other things (e.g. tokens) to be used for locally indexed files and generated assets where a URL is not available, to reduce the collision chance (there is probably a good chance of collision on something like c:\maptool\images\orc.png, and we want to encourage sharing campaigns). I guess what I am saying is: it's OK for a remote source to supply its own id, but we might as well reuse the code used elsewhere for RPTools-generated ids.
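The id fallback described above could be sketched like this, with java.util.UUID standing in for the rplib GUID class (IdPolicy and its method are illustrative names):

```java
import java.util.UUID;

// Sketch: accept a remote-supplied id when one exists; otherwise mint a
// random one, so local paths like c:\maptool\images\orc.png never become
// colliding ids across different users' machines.
public class IdPolicy {
    public static String idFor(String remoteSuppliedId) {
        return (remoteSuppliedId != null && !remoteSuppliedId.isBlank())
                ? remoteSuppliedId
                : UUID.randomUUID().toString();
    }
}
```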


username wrote:I haven't checked all the requirements, but I can build the Linux piece in less than 5 minutes on my 5-year-old machine. CloudBees offers 2000 min/month for free; that's more than an hour a day. (I don't see that they count traffic as well.) Is there anything in their conditions you mistrust? (Maybe all the "be our PR agent" type statements?)


Actually, I hadn't noticed the open-source link down the bottom (maybe it's time for some corrective eyewear :-) ). After looking at it I wouldn't say I mistrust the conditions, but considering I don't even have or want a Facebook account...

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: RPTools "next" Git Repository

Post by Craig »

Craig wrote:

Code:

if (newSupplier == null) {
   throw new NullPointerException("place description of problem here");
}


Just for anyone interested: I just found out that Java 7 has an Objects class which has convenience methods for checking for null and throwing exceptions, e.g.

Code:

   this.name = Objects.requireNonNull(name, "Name cannot be null.");

I don't recall seeing that in any of the "new features in Java 7" articles (but that might just be due to getting older :) ).

Lee
Dragon
Posts: 958
Joined: Wed Oct 19, 2011 2:07 am

Re: RPTools "next" Git Repository

Post by Lee »

Very interesting stuff. Thanks Craig and company!

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: RPTools "next" Git Repository

Post by Craig »

badsequel wrote:
Craig wrote:That got me thinking of a NoSql DB like MongDB..


JML wrote:You made me take a quick look at existing NoSQL DBs. Also I don't have a clue about the pros and cons, I found those: db4o, ObjectDB, NeoDatis.



MongoDB unfortunately can't be embedded in a Java process.

Purely object-based databases like db4o, ObjectDB, and NeoDatis really aren't a good fit, as they would just push the problem down to the database level, which would be even more inefficient.

What is needed is something that's more of a document-based NoSQL database, or I guess even a graph one could work. It would also need to be embeddable in a Java process (getting people to install/set up/run a DB server is not really viable :) ), and of course the license has to be compatible.

Something like OrientDB looks like it would be an OK fit, but I am hesitant to invest the time trying to get it working due to the lack of decent documentation and how clunky the embedded config looks.

Then again, I am not convinced that a relational database isn't just as good a fit; mind you, after many many many years of working with relational databases my thinking could just be skewed :)
