FEATURE: MapTool Database

Developer discussion regarding MapTool 1.4

Moderators: dorpond, trevor, Azhrei

aliasmask
RPTools Team
Posts: 9029
Joined: Tue Nov 10, 2009 6:11 pm
Location: Bay Area

FEATURE: MapTool Database

Post by aliasmask »

Any plans to include SQL access? That would really solve my issue with storing player-related data. My old way was to create a lib token for each player.

RPTroll
TheBard
Posts: 3159
Joined: Tue Mar 21, 2006 7:26 pm
Location: Austin, Tx

Re: Preview build for 1.4.somethingorother

Post by RPTroll »

aliasmask wrote: Any plans to include SQL access? That would really solve my issue with storing player-related data. My old way was to create a lib token for each player.
I would love this as well. Even a NoSQL database would be good.

Oh, and this is awesome, Craig. Thanks for putting this out.

JamzTheMan
Great Wyrm
Posts: 1872
Joined: Mon May 10, 2010 12:59 pm
Location: Chicagoland

Re: Preview build for 1.4.somethingorother

Post by JamzTheMan »

RPTroll wrote:
aliasmask wrote: Any plans to include SQL access? That would really solve my issue with storing player-related data. My old way was to create a lib token for each player.
I would love this as well. Even a NoSQL database would be good.

Oh, and this is awesome, Craig. Thanks for putting this out.
I took a look at this a little last year. There is actually a Java DB of sorts that just uses flat files, IIRC. We really don't need anything too fancy, and it could let non-techy users mass-import data more easily. I really need to get my GitHub build working.... :roll:
-Jamz
____________________
Custom MapTool 1.4.x.x Fork: maptool.nerps.net
Custom TokenTool 2.0 Fork: tokentool.nerps.net
More information here: MapTool Nerps! Fork

RPTroll
TheBard
Posts: 3159
Joined: Tue Mar 21, 2006 7:26 pm
Location: Austin, Tx

Re: Preview build for 1.4.somethingorother

Post by RPTroll »

A DB for import and custom tables would be great. There are lots of small, highly efficient ones to pick from.

JamzTheMan
Great Wyrm
Posts: 1872
Joined: Mon May 10, 2010 12:59 pm
Location: Chicagoland

Re: Preview build for 1.4.somethingorother

Post by JamzTheMan »

This is what I was referring to: http://www.oracle.com/technetwork/java/ ... index.html

It's small and embedded in the JDK. Any reason to mess with a third-party solution?

I'd like to expose some XML-based macro functions for querying and manipulating data. JSON is better, but sometimes you have to deal with external data -- HeroLabs, for instance.
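For what it's worth, here's the kind of thing I mean -- an untested sketch of talking to the embedded Java DB/Derby over plain JDBC (database and table names are made up):

Code: Select all

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DerbyScratch {
    public static void main(String[] args) throws Exception {
        // ";create=true" creates the database directory on first use
        try (Connection conn = DriverManager.getConnection(
                "jdbc:derby:frameworkData;create=true");
             Statement st = conn.createStatement()) {
            // first run only; this would fail once the table exists
            st.executeUpdate(
                "CREATE TABLE spells (name VARCHAR(64), lvl INT)");
            st.executeUpdate("INSERT INTO spells VALUES ('Magic Missile', 1)");
            try (ResultSet rs = st.executeQuery(
                    "SELECT name FROM spells WHERE lvl = 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}

No server process, no install; the whole database is just a directory on disk.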
-Jamz

RPTroll
TheBard
Posts: 3159
Joined: Tue Mar 21, 2006 7:26 pm
Location: Austin, Tx

Re: Preview build for 1.4.somethingorother

Post by RPTroll »

I didn't realize they embedded Derby into core Java. Shiny!

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: Preview build for 1.4.somethingorother

Post by Craig »

RPTroll wrote:
aliasmask wrote: Any plans to include SQL access? That would really solve my issue with storing player-related data. My old way was to create a lib token for each player.
I would love this as well. Even a NoSQL database would be good.

Oh, and this is awesome, Craig. Thanks for putting this out.
Hmm Databases...
I want to; the trick is how to deal with them properly.
One thing I want to do is, instead of a campaign file, have a "campaign directory". This has the advantage of being easier to deal with once things get large, for a whole variety of reasons (not the least of which is backups and syncing with cloud storage). But you can also put a database in a directory a LOT more easily than you can put one in a zip file. It could also contain a sandboxed area for macros to read/write things (and by things I mean text, not binary, and maybe a warning if a macro tries to write too much).
I also have this Utopian dream where macros would live in their own files (or collections of files) in this world, and you could use any editor you want to edit them; MapTool would detect a change to the macro and reload it -- how feasible this is I am not sure -- this is not to say that an improved editor within MapTool would not be great too. Anyway, I digress; back to databases for now.

Below are my thoughts; I would be interested to hear others'.
There are three different ways to approach this:
1) The database is external to MapTool (I really don't like this).
2) The database lives only on the server, and each query -- or update, for that matter -- from a client makes a call to the server, which accesses the database.
3) There is a copy of the database on all "fat" clients.
The second method is certainly the easiest, although it does of course add a round trip to any query; if you have a dedicated server with a fast connection this is not much of an issue. The third method avoids the round trip but adds the problem of keeping the databases in sync. Obviously we can't go with a method similar to what we have now; I'm not sure I want to send a whole table whenever a column is updated. I would also not want to even contemplate a database-supported replication method, as those are designed for load balancing and failover and require fairly complex setup steps -- and you think people don't like port forwarding!...

Not sure which of the two methods I prefer, really... Maybe a hybrid approach, where data that doesn't change often and that clients can't change (e.g. the equivalent of MapTool lookup tables) is safely cached on the client side, and if it's updated, a full download of that subset is performed (along with a check and download on connection). More volatile data lives only where the campaign directory is loaded. If we want to cache the more volatile data as well, then we have to track when each row in a table is updated, plus a change number, so that when a client first connects it can say "the latest change in my cache (in the .maptool dir) is X; give me everything after X" -- you certainly wouldn't want to refresh the whole cache on every connection. I am also in favor of moving most things into the database and not carrying them around in memory all the time, although this does mean a big change for autosave/campaign backups.
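To make the handshake concrete, something like this is what I am picturing on the server side (completely hypothetical names; just a sketch of the idea):

Code: Select all

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class CacheSync {
    /**
     * Client says "my latest change is X"; the server returns every row
     * of the cached table that changed after X, not the whole table.
     */
    static List<String> rowsAfter(Connection db, long clientChangeNum)
            throws SQLException {
        List<String> rows = new ArrayList<>();
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT name, value FROM lookup_table WHERE change_num > ?")) {
            ps.setLong(1, clientChangeNum);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(rs.getString("name") + "=" + rs.getString("value"));
                }
            }
        }
        return rows; // serialize and send back to the client
    }
}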

RPTroll
TheBard
Posts: 3159
Joined: Tue Mar 21, 2006 7:26 pm
Location: Austin, Tx

Re: Preview build for 1.4.somethingorother

Post by RPTroll »

Craig wrote: I also have this Utopian dream where macros would live in their own files (or collections of files) in this world, and you could use any editor you want to edit them; MapTool would detect a change to the macro and reload it -- how feasible this is I am not sure -- this is not to say that an improved editor within MapTool would not be great too.
OMG YES!!!! Early on I'd settle for a refresh button to do the reload. I'd recommend that non-player macros exist only on the server, or in a spot that players can't access. File locking?



Craig wrote: Hmm Databases...
Below are my thoughts; I would be interested to hear others'.
There are three different ways to approach this:
1) The database is external to MapTool (I really don't like this).
2) The database lives only on the server, and each query -- or update, for that matter -- from a client makes a call to the server, which accesses the database.
3) There is a copy of the database on all "fat" clients.
The second method is certainly the easiest, although it does of course add a round trip to any query; if you have a dedicated server with a fast connection this is not much of an issue. The third method avoids the round trip but adds the problem of keeping the databases in sync. Obviously we can't go with a method similar to what we have now; I'm not sure I want to send a whole table whenever a column is updated. I would also not want to even contemplate a database-supported replication method, as those are designed for load balancing and failover and require fairly complex setup steps -- and you think people don't like port forwarding!...
From a coding and data-sync standpoint, option two is better, although there may be one-way replication schemes that aren't heinous: queries would always be local, but updates would go to the server. I'll do some digging to see if such a capability exists in the open-source/free wilds. In this case you'd need two instances (if I'm thinking through this properly): one would be the replicant of the server, while the other would be local and would not get overwritten on a connect to the server. The complexity likely won't justify the implementation, but I'll still check. And if we isolate the data access layer well enough, we can always swap it out later.

Awesome stuff, Craig.


Edit: 03/10/2015 - A few trollish thoughts.
I'm wondering if character tokens make a natural place to cache character data that's represented in a database. Right now we have this as a Basic property set:
Strength (Str)
Dexterity (Dex)
Constitution (Con)
Intelligence (Int)
Wisdom (Wis)
Charisma (Char)
*@HP
*@AC
Defense (Def)
Movement (Mov)
*Elevation (Elv)
Description (Des)

Where *, #, (), and : are used as codes for different things. It would seem we already have the basis of a table definition on the token. Mechanically, changes to these could be loaded into a database on some sort of lazy save, or queued in a message buffer. Tangential to that, a message queue could be used (assured delivery, collision handling) and abstracted for other uses inside the framework.
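Just to visualize it, the Basic list above maps to something like this (column names and types are pure guesswork on my part, not a real MT schema):

Code: Select all

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class CharacterTable {
    static void create(Connection db) throws SQLException {
        try (Statement st = db.createStatement()) {
            st.executeUpdate(
                "CREATE TABLE character_stats (" +
                " token_id CHAR(36) PRIMARY KEY," + // the token's GUID
                " str INT, dex INT, con INT," +
                " intel INT, wis INT, cha INT," +   // INT is a reserved word
                " hp INT, ac INT, def INT, mov INT," +
                " elevation INT," +
                " description VARCHAR(1024))");
        }
    }
}

A lazy save would then just be an UPDATE against this table, queued off the message buffer.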
Last edited by RPTroll on Tue Mar 10, 2015 9:40 am, edited 1 time in total.

Azhrei
Site Admin
Posts: 12086
Joined: Mon Jun 12, 2006 1:20 pm
Location: Tampa, FL

Re: Preview build for 1.4.somethingorother

Post by Azhrei »

One option is something like these sites:

http://www.freesqldatabase.com/freemysqldatabase/
http://www.freemysqlhosting.net/
https://mongolab.com/plans/
http://haphost.com/free-mysql-hosting/ (not taking new accounts right now)

where individual gaming groups could sign up for an account and then all MT clients would access the database remotely. If the connection characteristics were stored in the campaign and shared with all clients, there would be no configuration needed on each client. The above links are to free remote database hosting; if you exceed the sizes of the free accounts, they can be seamlessly upgraded. (The MongoDB site offers 500MB of free database space, 100x more than the others!)
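The client side would be trivial. A sketch of what I mean by "no configuration needed" (all names and the URL are invented for illustration):

Code: Select all

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class RemoteCampaignDb {
    // The campaign carries the connection characteristics, so every
    // client opens the same remote database with zero local setup.
    static Connection open(Properties campaign) throws Exception {
        // e.g. db.url = "jdbc:mysql://sql.example.net:3306/our_campaign"
        return DriverManager.getConnection(
                campaign.getProperty("db.url"),
                campaign.getProperty("db.user"),
                campaign.getProperty("db.password"));
    }
}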

Another option would be a thin RESTful layer, built in PHP, that accesses a non-public database. This is the way most web hosting sites work: the database access they provide is to private servers that are not accessible from the 'net, hence the need for the interface layer. This would be slower, obviously, because of the extra layer, but it could be fairly easy to implement.

(Btw, I was looking for some type of tunneling software the other day for MT. Something that allows a single port to be opened in a router, but would allow MT to pass multiple types of traffic over it with some kind of prioritization as well. Haven't found the perfect solution yet.)

JamzTheMan
Great Wyrm
Posts: 1872
Joined: Mon May 10, 2010 12:59 pm
Location: Chicagoland

Re: Preview build for 1.4.somethingorother

Post by JamzTheMan »

Not to beat a dead horse, but Derby has several options for replication, and we may be able to leverage one of them.

First question is: what do we want a DB for, and what will go in it? I see two main uses:

1. A more robust "table" (durrr) to store large sets of game data, e.g. spell data, feats, gun stats, etc., plus a subset of quicker, more frequently accessed data like combat conditions/modifiers.

2. A store-everything model, where token data gets shunted over to the DB.


So as not to fall into the same habit we've had for the last 5 years, let's NOT tackle #2 if it's going to add another 2 years of talk/coding when we can do #1 in a much quicker timeframe.

I suggest picking an easy, quick solution to do a proof of concept: see what works and what the pitfalls are, and provide the shell needed for #1 (macro functions, etc.). It doesn't have to be perfect; it can be yanked and replaced. Something is better than nothing.

I would think a "remote" master DB (and possibly cloned locally) for static data would suffice. A small delay in looking up, say a spell, remotely would be acceptable, it's reference data only. And at the same time this sort of data rarely changes (almost never during a game). Other data (using LW's framework as an example) uses condition and senses data heavily in macros so a fast sql lookup is required (would it be faster than the current table lookup?) and would need to be cached locally, but again, pretty static data that doesn't change during game.

So really, only #2 needs heavy thought on data replication/sync, as it basically replaces how token data syncing works.
Techy stuff, from the Derby docs:
Replication is an important feature of a robust database management system. In Derby, you start database replication by using connection URL attributes.

The replication capability of Derby has the following features:

- One master, one slave: A replicated database resides in two locations and is managed by two different Derby instances. One of these Derby instances has the master role for this database, and the other has the slave role. Typically, the master and slave run on different nodes, but this is not a requirement. Together, the master and its associated slave represent a replication pair.
- Roll-forward shipped log: Replication is based on shipping the Derby transaction log from the master to the slave, and then rolling forward the operations described in the log to the slave database.
- Asymmetry: Only the master processes transactions. The slave processes no transactions, not even read operations.
- Asynchronicity: Transactions are committed on the master without waiting for the slave. The shipping of the transaction log to the slave is performed regularly, and is completely decoupled from the transaction execution at the master. This may lead to a few lost transactions if the master crashes.
- Shared nothing: Apart from the network line, no hardware is assumed to be shared.
- Replication granularity: The granularity for replication is exactly one database. However, one Derby instance may have different roles for different databases. For example, one Derby instance may have the following roles, all at the same time:
  - The master role for one database D1, replicated to one node
  - The slave role for a database D2, replicated from another node
  - The normal, non-replicated role for a database D3
Replication builds on Derby's ability to recover from a crash by starting with a backup and rolling forward Derby's transaction log files. The master sends log records to the slave using a network connection. The slave then writes these log records to its local log and redoes them.

If the master fails, the slave completes the recovery by redoing the log that has not already been processed. The state of the slave after this recovery is close to the state the master had when it crashed. However, some of the last transactions performed on the master may not have been sent to the slave and may therefore not be reflected. When the slave has completed the recovery work, it is transformed into a normal Derby instance that is ready to process transactions. For more details, see Forcing a failover and Replication and security.

Several Derby properties allow you to specify the size of the replication log buffers and the intervals between log shipments, as well as whether replication messages are logged. See the Derby Reference Manual for details.

You can perform replication on a database that runs in either embedded mode or Network Server mode.
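For reference, the connection-URL attributes the docs are talking about look roughly like this (hosts and database names invented; check the Derby docs before trusting me on the details):

Code: Select all

import java.sql.DriverManager;

public class DerbyReplicationDemo {
    public static void main(String[] args) throws Exception {
        // On the slave machine: boot a copy of the database in slave mode
        // first, so it sits waiting for the master's log shipments.
        // DriverManager.getConnection(
        //     "jdbc:derby:campaignDB;startSlave=true;slaveHost=slave.local");

        // On the master: start shipping the transaction log to the slave.
        DriverManager.getConnection(
            "jdbc:derby:campaignDB;startMaster=true;slaveHost=slave.local");

        // If the master dies, promote the slave to a normal database:
        // DriverManager.getConnection("jdbc:derby:campaignDB;failover=true");
    }
}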


@Az, although an online database subscription is fine for some techies (and I would want to use my own webhost's DB), I doubt we're really delivering something for the masses there. Not every gaming group has a tech head; a lot just want to play D&D. So maybe, what, 10% of the users would use this? Now, maybe, a "public" online database with read-only access may be nice for a given "framework". E.g., I could see a spell/feat/gun/etc. static database maintained for a given FW, so ALL clients point to it when a piece of reference data is needed. A single point to update, and always up to date (provided the FW owner maintains it).
-Jamz

Azhrei
Site Admin
Posts: 12086
Joined: Mon Jun 12, 2006 1:20 pm
Location: Tampa, FL

Re: Preview build for 1.4.somethingorother

Post by Azhrei »

JamzTheMan wrote: 1. A more robust "table" (durrr) to store large sets of game data, e.g. spell data, feats, gun stats, etc., plus a subset of quicker, more frequently accessed data like combat conditions/modifiers.
I agree. It seems wise to use a db for simple stuff first, especially stuff that is commonly associated with databases in general.

While it might be nice to store character data in a DB, the constant updates would be a problem, I think. Much better to load the initial data from the DB and keep it locally, then push it back to the DB when the campaign is saved.
JamzTheMan wrote: Now, maybe, a "public" online database with read-only access may be nice for a given "framework". E.g., I could see a spell/feat/gun/etc. static database maintained for a given FW, so ALL clients point to it when a piece of reference data is needed. A single point to update, and always up to date (provided the FW owner maintains it).
That's a great use for this kind of thing! A public database with read-only access for most users, associated with a particular framework. Some data might be pulled from the DB and cached locally, while other data (like spell descriptions) would be fetched on demand -- under the control of the macros running the framework, of course.

The replication thing... that's a huge can of worms when it comes to network bandwidth. Most replicating databases assume they have fast connections between engine instances, and when they don't, bad things happen. If there's going to be replication, then there would need to be some type of connection monitoring (RTT, for example) so that the two systems with the fastest connections host the database. Too much effort.

Now if there's some way to incorporate a bittorrent approach into replicating data across multiple nodes, that would be pretty cool. :)

RPTroll
TheBard
Posts: 3159
Joined: Tue Mar 21, 2006 7:26 pm
Location: Austin, Tx

Re: Preview build for 1.4.somethingorother

Post by RPTroll »

It appears Derby has the ability to pull in data from an external source for use locally if we were to implement a Derby DB on the MapTool server.
Programming Derby-style table functions
Derby lets you create table functions. Table functions are functions which package up external data to look like Derby tables. The external data can be an XML file, a table in a foreign database, a live data feed--in short, any information source that can be presented as a JDBC ResultSet.

Derby-style table functions let you efficiently import foreign data into Derby tables. Table functions let you join Derby tables with any of the following data sources:

XML-formatted reports and logs
Queries that run in foreign databases
Streaming data from sensors
RSS feeds
See "CREATE FUNCTION statement" in the Derby Reference Manual for the complete syntax needed to declare Derby-style table functions. The following topics provide information on how to write Java methods which wrap foreign data sources inside ResultSets.

JamzTheMan
Great Wyrm
Posts: 1872
Joined: Mon May 10, 2010 12:59 pm
Location: Chicagoland

Re: Preview build for 1.4.somethingorother

Post by JamzTheMan »

@Az, ya, I don't claim to be a network guy, and I can see that implementing any sort of live replication would be a PITA. I think we would be safe with an "update on campaign load", though. We could lock table edits down when a server starts, or allow a table to be marked as "cacheable": cacheable tables would prevent edits during game play and get replicated, while an uncached table would be kept on the server only, mostly to be used to track game-play info (XP maybe, global macros, etc.) -- although we already have the ability to do that on a lib:token...

One can only guess what someone like AM or Wolf will do with something once they get their hands on it. That's why I'm a firm believer in the POC. I know what I "thought" I needed for the VBL functions, and what we ended up with after Wolf's input was much different and better.

@RPTroll, ya, I really like the simplicity of Derby the more I look at it. It's not like we'll be creating huge relational databases with crazy joins and GBs of data (at least one would hope not!) -- just something a little more flexible than storing JSONs of data on tokens/tables and iterating over them to get what we need.
-Jamz

RPTroll
TheBard
Posts: 3159
Joined: Tue Mar 21, 2006 7:26 pm
Location: Austin, Tx

Re: Preview build for 1.4.somethingorother

Post by RPTroll »

I'd be happy if it just replaced the current table functions. Anything else is gravy.

Craig
Great Wyrm
Posts: 2107
Joined: Sun Jun 22, 2008 7:53 pm
Location: Melbourne, Australia

Re: Preview build for 1.4.somethingorother

Post by Craig »

My thoughts.

While it might be nice to have remote access to a database outside of MapTool, I think relying on it as the only solution is not a good way to go.
Campaign files (or directories) would no longer be easily shareable, and it isn't workable for people who might play face to face with no connection.
It also doesn't just work when you fire things up; there is suddenly extra configuration that needs to be done to run a game.

As for built-in replication in databases, I don't think that is workable at all. Built-in database replication is used to support either high availability/load balancing or a remote failover site. Both of these need a fairly static set of nodes and aren't designed to have new nodes added at the drop of a hat; it is solving a completely different problem (in fact, several databases won't even allow you to connect to a slave in a master/slave configuration).

I also think we need to put character data in the database, as token properties are getting very large at this point, and if you add a database to store more information like spells, then you are just creating more data that will end up in token properties one way or another. From there it's just a matter of deciding how things are handled for consistency. So, if we're not talking about a round trip to the server for everything, do we:
1) Hold everything that is frequently updated in memory and only write it out on save (I don't even see the use of having a database in this scenario)?
2) Send all character/token data for a map every time a map is loaded and hold that in memory (not much better)?
3) Have a read-only cache of the data on the client side (changes all get sent to the server, and the server sends the change back to the client; no change is made locally until the server says so)?
4) Have a cache that works more like what happens today: changes are made locally, and each change is sent to the server, which ends up sending it back too?

The question then is how we handle replicating the data on a new connection. At the moment MapTool just sends everything down when connecting; this was fine when tokens were not holding that much, and it still works now, but if we are going to add more and more data it becomes a bit of a brain-dead way of doing it. A less brain-dead -- but still somewhat brute-force -- method would be to keep a "last updated" timestamp in every table. When a client connects, it can send "here is a list of my tables and the last time they were updated" (assuming a persistent cache), and MapTool can then gather all the data from any tables that were updated later than that and send them again. (This would require sending whole rows again, which is more than what actually changed, but any finer-grained solution would require keeping a transaction log, which is probably overkill.) Likewise, every 5/10/whatever minutes the client could send a similar message letting the server know where it is up to, so the server can make sure everyone is on the same page. Getting out of sync should be a rare occurrence, but hey, the code would already be there for the initial synchronization on connection, so why not make use of it to keep things running smoothly. The last-updated time should be maintained by triggers, so even updates made
directly to the database (when MT is not running, of course) will be part of the change history sent to clients the next time everything starts up.
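A sketch of what I mean by trigger-maintained history (Derby syntax; the spells table and its id column are invented). Rather than having a trigger touch the row's own timestamp, it writes to a changelog table, which is also what the "give me everything after X" query reads:

Code: Select all

import java.sql.Connection;
import java.sql.Statement;

public class ChangeTracking {
    static void install(Connection db) throws Exception {
        try (Statement st = db.createStatement()) {
            st.executeUpdate(
                "CREATE TABLE change_log (" +
                " table_name VARCHAR(32), row_id INT," +
                " updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP)");
            // Fires even for updates made outside MapTool, so direct
            // edits still end up in the change history.
            st.executeUpdate(
                "CREATE TRIGGER spells_updated" +
                " AFTER UPDATE ON spells" +
                " REFERENCING NEW AS n FOR EACH ROW" +
                " INSERT INTO change_log (table_name, row_id)" +
                " VALUES ('spells', n.id)");
            // On connect the client sends its last sync time; the server runs:
            //   SELECT DISTINCT row_id FROM change_log
            //   WHERE table_name = 'spells' AND updated > ?
        }
    }
}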

I am not a big fan of caching a lot of things in memory on top of the database (it's going to need to happen, but the less the better), as all the decent databases have good caching anyway. Oh, also: no ORM. For something like MapTool and the data people will be storing in it, an ORM just causes more problems than it solves. (Actually, I am not sure it would solve any problem other than "I don't want to learn SQL", and it would certainly create a few.)
