Too Many Darn Lists

Discuss macro implementations, ask for macro help (to share your creations, see User Creations, probably either Campaign Frameworks or Drop-in Resources).

Moderators: dorpond, trevor, Azhrei, giliath, jay, Mr.Ice

prestidigitator
Dragon
Posts: 317
Joined: Fri Apr 23, 2010 8:17 pm

Re: Too Many Darn Lists

Post by prestidigitator »

I wonder if it would be easier to share such lists if they were embedded in token properties. Maybe you could have one "Lib:ListRegistry" and then a library for each list (e.g. "Lib:AnimalList"), which would call up the list registry and add its own name to the set of candidates. If the actual data were stored as a JSON array, it'd be easy to format it outside of MapTool, paste it as a property value in a template token, and save the token for easy sharing. The registry would have the macro that allows the user to select one of the lists, then it could call a macro on the chosen list to complete the work.

Maybe that's getting a little complex. It'd be a fun little thing to implement, however. ;)
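A minimal sketch of how a list library might register itself (all names here are hypothetical; nothing like this exists yet). Something like this could run from an onCampaignLoad macro on "Lib:AnimalList":

Code: Select all

<!-- Hypothetical: "Lib:AnimalList" registers itself with "Lib:ListRegistry" -->
[h: candidates = getLibProperty("registeredLists", "Lib:ListRegistry")]
<!-- first use: initialize the registry property to an empty json array -->
[h: candidates = if(json.type(candidates) == "ARRAY", candidates, "[]")]
[h: candidates = json.append(candidates, "Lib:AnimalList")]
[h: setLibProperty("registeredLists", candidates, "Lib:ListRegistry")]

The registry's chooser macro could then read "registeredLists" and offer the set of candidates via input().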
"He knows not how to know who knows not also how to un-know." --Sir Richard Burton

wolph42
Winter Wolph
Posts: 9999
Joined: Fri Mar 20, 2009 5:40 am
Location: Netherlands
Contact:

Re: Too Many Darn Lists

Post by wolph42 »

prestidigitator wrote:I wonder if it would be easier to share such lists if they were embedded in token properties. Maybe you could have one "Lib:ListRegistry" and then a library for each list (e.g. "Lib:AnimalList"), which would call up the list registry and add its own name to the set of candidates. If the actual data were stored as a JSON array, it'd be easy to format it outside of MapTool, paste it as a property value in a template token, and save the token for easy sharing. The registry would have the macro that allows the user to select one of the lists, then it could call a macro on the chosen list to complete the work.

Maybe that's getting a little complex. It'd be a fun little thing to implement, however. ;)
Actually, that's roughly how I currently keep my 'lists' (more tables, actually) stored in my FW: on a lib:token in json format. The main reason for this is that all of my tables have more than one column, which rules out MapTool tables by default.

In addition to what you're suggesting, when using json you can also:
- create lists with more than two columns, very handy for a name generator, which usually consists of 2 or 3 columns
- create a parser (a form with a 'textblock') into which you can copy-paste an Excel table and have it parsed into a json object
- create a function, e.g. jTable( 'tablename', optional: 'rownumber' [default = random], optional: 'columnnumber' or 'columnname' [default = 2], optional: 'pickOne', 'pickAllRandom' or 'pickAll' [default = pickOne], optional: 'delimiter' [default = none], optional: 'lib:Name' [default = 'lib:Tables'] ), where (see the usage sketch below):
'pickOne' = choose from one column
'pickAll' = pick the given rownumber from all columns
'pickAllRandom' = pick a random item from each column
'delimiter' = the delimiter for the output of pickAll(Random).
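To make that concrete, calls to such a function might look like this (purely hypothetical - jTable is only a proposal and none of this exists yet):

Code: Select all

<!-- Hypothetical calls to the proposed jTable() function -->
[h: weapon = jTable( "weapons" )]                              <!-- pickOne: random row, column 2 -->
[h: npc    = jTable( "names", "", "", "pickAllRandom", " " )]  <!-- one random item per column, space-separated -->
[h: row7   = jTable( "weapons", 7, "", "pickAll", "; " )]      <!-- all columns of row 7 -->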

But let's first start with just what we're currently working with; if it starts taking off, I'm willing to build something like the above.

prestidigitator
Dragon
Posts: 317
Joined: Fri Apr 23, 2010 8:17 pm

Re: Too Many Darn Lists

Post by prestidigitator »

Nice. :)
"He knows not how to know who knows not also how to un-know." --Sir Richard Burton

wolph42
Winter Wolph
Posts: 9999
Joined: Fri Mar 20, 2009 5:40 am
Location: Netherlands
Contact:

Re: Too Many Darn Lists

Post by wolph42 »

Attached is an update of the earlier tablesList campaign. It now sorts the lists and doesn't show the 'lst' prefix anymore.
Here, again, the code:

Code: Select all

<!--lists-->

<!-- grab the campaign table names and sort them -->
[h:campProps   = getInfo("campaign")]
[h:jsonTables  = json.get(campProps, "tables")]
[h:jsonTables  = json.sort(jsonTables)]
[h:allTables   = json.toList(jsonTables)]

<!-- keep only the tables prefixed "lst"; list them without the prefix -->
[h:listTables = ""]
[h:listTablesNames = ""]
[h, foreach(item, allTables), if(startsWith(item, "lst")), CODE:{
    [listTablesNames  = listAppend(listTablesNames, substring(item, 3))]
    [listTables       = listAppend(listTables, item)]
}]

<!-- let the user pick a list, then roll on the matching table -->
[h:status=input("chosenList|"+listTablesNames+"|Choose a list|RADIO|SELECT=0")]
[h:abort(status)]

[h:chosenList = listGet(listTables, chosenList)]

[r:table(chosenList)]
Attachments
tablesList b2.cmpgn
(87.21 KiB) Downloaded 59 times

biodude
Dragon
Posts: 444
Joined: Sun Jun 15, 2008 2:40 pm
Location: Montréal, QC

Re: Too Many Darn Lists

Post by biodude »

So, are people interested in storing tabular data on a token, or organizing lists in MapTool 'Tables'? The major differences to me being:
> MapTool Tables are very limited in structure (2 columns, one of which is an index), but they can store images.
> Tabular data on a token can have a more diverse structure, but cannot store image data.

I've been thinking of building a drop-in tool for a while, to handle tabular data stored on a token. I haven't done a lot of coding yet, mostly a lot of planning, to make development easier. However, some of the comments in this thread sound dangerously close to what I was thinking of, so I thought I'd lay it out for consideration:

The original concept was intended more for displaying lists in a frame for quick reference, but would include editing UI & retrieval functions.

Essentially, each table/list would be stored as a 'table object' (we could call them something like 'token-tables' or 'data-frames' to distinguish them from MapTool tables). A table object would actually consist of three elements:
  • Name & description / caption (stored in separate properties or together as a small json object)
  • Structure: a json object for each column would contain details on table structure, including column names, data type, and formatting options for display.
  • Data: each row is an array, combined into an array of rows.
My plan was to store each of these components in a separate token property, prefaced by an identifying label ('ttbl.' or something) + a shared ID (not the table name, but something immutable), e.g. 'ttbl.001:Name'. Wiki: getMatchingProperties() can easily be used to extract all of them at once.
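For illustration, creating and retrieving one such table object could look like this (a sketch only: the 'ttbl.' label, the '001' ID, and the property layout are just the suggestions above, not an existing tool):

Code: Select all

<!-- Sketch: one hypothetical table object stored in three token properties -->
[h: meta   = json.set( "{}", "name", "Contacts", "caption", "People the PC knows" )]
[h: struct = json.append( "[]",
      json.set( "{}", "name", "NPC",      "type", "string" ),
      json.set( "{}", "name", "Attitude", "type", "string" ) )]
[h: data   = json.append( "[]",
      json.fromList( "Baron Ulrik;hostile", ";" ),
      json.fromList( "Mira;friendly", ";" ) )]
[h: setProperty( "ttbl.001:Name",      meta )]
[h: setProperty( "ttbl.001:Structure", struct )]
[h: setProperty( "ttbl.001:Data",      data )]
<!-- one regex pulls the names of all three properties back at once -->
[h: parts = getMatchingProperties( "ttbl\\.001:.*" )]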

There would be obvious 'methods' (UDFs) to handle common tasks (anything starting with a period would be prefixed by a label depending on what these 'table objects' are called, e.g. 'ttbl', 'dframe', etc.; a flavour sketch follows after the list):
UI
  • .show(): displays the tables with a 'management' interface to reorder, add/edit/delete/duplicate tables.
  • .display(): format table data into html for display in a frame, according to settings (no management interface).
  • .manage(): handle management commands for all tables on a token (reorder, add/delete/duplicate, hide/show, etc.)
  • .edit(): edit a single table in an html form-type interface.
  • .import(): paste text data into a text box, to be parsed & converted into table data (a single row or column is also an option).
  • .export(): display data as plain text (csv, delimited, or fixed width), for export to a spreadsheet, for example.
  • .transfer(): send the data to a destination (selected) token.

Data handling
  • .fetch(): extract a table object (raw data form), specified by ID, from a specified token.
  • .get(): retrieve specific row or column data, depending on arguments submitted
  • .set(): alter specific row or column data, depending on arguments submitted
  • .parse/textToTable(): parses text passed in and extracts individual columns, rows, or entire table data. Can be delimited (non-numeric delimiter argument) or fixed-width (numeric delim argument specifying the number of characters in each column).
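Hypothetical calls, just to give a flavour of the proposed API (none of these UDFs exist yet, and the argument conventions are invented here for illustration):

Code: Select all

<!-- Flavour sketch of the proposed UDFs - purely hypothetical -->
[h: contacts = ttbl.fetch( "ttbl.001", currentToken() )]    <!-- raw table object by ID -->
[h: row3     = ttbl.get( "ttbl.001", "row", 3 )]            <!-- one row of data -->
[h: ignore   = ttbl.set( "ttbl.001", "cell", 3, "Attitude", "friendly" )]
[r: ttbl.display( "ttbl.001" )]                             <!-- render as html for a frame -->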


I hadn't considered the lookup functions mentioned by wolph42, but that seems like a logical and relatively simple thing to add, as a variation on .get().

1. Would people be interested in such a thing, and would it satisfy your criteria?
2. If so, would people be willing to collaborate on this so I don't have to do it all myself ;)?
3. Can you suggest a good name for such a construct, or better names for the functions?
"The trouble with communicating is believing you have achieved it"
[ d20 StatBlock Importer ] [ Batch Edit Macros ] [ Canned Speech UI ] [ Lib: Math ]

neofax
Great Wyrm
Posts: 1694
Joined: Tue May 26, 2009 8:51 pm
Location: Philadelphia, PA
Contact:

Re: Too Many Darn Lists

Post by neofax »

MapTool tables can hold JSONs as well as tokens, and as pointed out, they have pics associated.
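For example, a table entry's value can itself be a json object that a macro unpacks (assuming a campaign table 'Weapons' whose entries hold json objects with a 'damage' key - just an illustration):

Code: Select all

<!-- read the json object stored as the value of table entry 3 -->
[h: jsonRow = table( "Weapons", 3 )]
[h: dmg     = json.get( jsonRow, "damage" )]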

wolph42
Winter Wolph
Posts: 9999
Joined: Fri Mar 20, 2009 5:40 am
Location: Netherlands
Contact:

Re: Too Many Darn Lists

Post by wolph42 »

neofax wrote:MapTool tables can hold JSONs as well as tokens, and as pointed out, they have pics associated.
and as has also been pointed out: it's read-only AND, moreover, it's insanely cumbersome to create a list even with my Excel tool.
biodude wrote:So, are people interested in storing tabular data on a token, or organizing lists in MapTool 'Tables'? The major differences to me being:
That would be 'storing tabular data on a token'.
I've already taken the first steps to see how difficult it is and how 'good' it works. Conclusion:
1. Not very difficult (though I did spend a few hours thinking it through)
2. Very, very, very slow

Attached is my test setup. It contains a campaign file with a macro to run an input frame; there you can paste a table straight out of Excel, which it then parses into a json object before randomly retrieving the values of one row. There is no storing onto a token yet (which in all likelihood will make it even slower).
Admittedly this is very quick-n-dirty code and can be optimized. I doubt, however, that it will have enough impact to make it usable.

Anyway:
1. run the campaign (in MT70)
2. select the wolf
3. run ListBuilderFrame
4. open the Excel file
5. copy the first 50 !! (not all) rows of the weapon table
6. paste them into the textbox
7. press save
8. wait 1:38 minutes (that's on my not-so-very-fast pc: 1:35 to parse into json, 3 s to retrieve the data and show the frame)
9. a frame will pop up with some data and a randomly selected row (extracted from the json).
Note: if you check out the parser "listParserJson" code, I've mixed up the terms 'rows' and 'columns' (I use them both correctly and incorrectly), which makes it a bit incomprehensible.

@biodude: As for cooperating with your venture... reading your post, I believe your MT-Script-Fu is way higher (and more organised) than mine. Next to that, I want to focus on the 'list' parser and not on a full-scale table platform as you described.
So if you think you can live with my QnD code style and would like to focus on getting a list parser (or whatever it will be named), then yes, I think a co-op is possible.

Edit: did some testing; time to parse a textblock into a json object (each row has 25 columns):
25 rows (625 items) = 30 seconds
50 rows (1250 items) = 95 seconds
100 rows (2500 items) = 315 seconds

I've added a graph to the Excel file with the test results, nicely showing an exponential.
Attachments
TableParser Test setup.zip
a dabble
(121.63 KiB) Downloaded 61 times

biodude
Dragon
Posts: 444
Joined: Sun Jun 15, 2008 2:40 pm
Location: Montréal, QC

Re: Too Many Darn Lists

Post by biodude »

wolph42 wrote:
neofax wrote:MapTool tables can hold JSONs as well as tokens, and as pointed out, they have pics associated.
and as has also been pointed out: it's read-only AND, moreover, it's insanely cumbersome to create a list even with my Excel tool.
I agree: the problem is not the capacity or capability of MapTool Tables, but rather the UI. Furthermore, the tool I was thinking of would be useful for storing token-specific tables (memorized spells, lists of contacts, inventory, etc.): 'quick-n-dirty' stuff that doesn't have to be integrated into a larger framework (but could be...).
wolph42 wrote:@biodude: As for cooperating with your venture... reading your post, I believe your MT-Script-Fu is way higher (and more organised) than mine. Next to that, I want to focus on the 'list' parser and not on a full-scale table platform as you described.
So if you think you can live with my QnD code style and would like to focus on getting a list parser (or whatever it will be named), then yes, I think a co-op is possible.
In that case, feel free to focus on the list parser and share your results. That is definitely something that I could use and incorporate into my project, and it would save me the effort ;-) I'm not terribly good at performance anyway, although regex parsing can be surprisingly fast in my experience. I'll stay tuned and watch as things develop here.
"The trouble with communicating is believing you have achieved it"
[ d20 StatBlock Importer ] [ Batch Edit Macros ] [ Canned Speech UI ] [ Lib: Math ]

wolph42
Winter Wolph
Posts: 9999
Joined: Fri Mar 20, 2009 5:40 am
Location: Netherlands
Contact:

Re: Too Many Darn Lists

Post by wolph42 »

Just had a brainfart :idea: about the efficiency.

The time results got me thinking: the time to process grows exponentially with the number of items to process.
In my test setup I made use of json.pset (I forgot who the brilliant person was that created it, but I am gratefully using it). This does mean, however, that every single item is added to an ever-growing json object, which is most likely the reason for the exponential time growth.

The current structure that is built consists of:
Main json object = {1stRow{1stColumnHeader = 1stColumnValue, 2ndColumnHeader = ...}, 2ndRow{...}, ...}
This means that every jsonRow is actually built inside the [foreach(item, line, tab)] routine. So I guess it will be waaay more efficient to build a complete jsonRow and THEN add the complete row to the main json object.
This way a 'very large json operation' is only done 50 times (in case I submit a 50-row table): once per row instead of once per item.

On the other hand, if you have a table of, say, 3 columns x 500 rows, it will be much slower (though still quite a bit speedier: 500 operations vs 1500).
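In code, the difference looks roughly like this (a minimal sketch; json.pset is the community UDF used in the test setup, not a built-in function):

Code: Select all

<!-- one cell of a table, two ways to add it -->
[h: jTable = "{}"]
[h: jRow   = "{}"]
[h: row    = "1stRow"]
[h: column = "1stColumnHeader"]
[h: item   = "1stColumnValue"]

<!-- per item (slow): the entire, ever-growing table object is rewritten for every cell -->
[h: jTable = json.pset( jTable, row + "/" + column, item )]

<!-- per row (faster): collect the cells of one row in a small object first... -->
[h: jRow = json.set( jRow, column, item )]
<!-- ...then add the finished row to the big table once per row -->
[h: jTable = json.set( jTable, row, jRow )]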

I'm curious if anyone besides me understands what I just farted onto the forum... :?

biodude
Dragon
Posts: 444
Joined: Sun Jun 15, 2008 2:40 pm
Location: Montréal, QC

Re: Too Many Darn Lists

Post by biodude »

wolph42 wrote:
biodude wrote:So, are people interested in storing tabular data on a token, or organizing lists in MapTool 'Tables'? The major differences to me being:
That would be 'storing tabular data on a token'.
I've already taken the first steps to see how difficult it is and how 'good' it works. Conclusion:
1. Not very difficult (though I did spend a few hours thinking it through)
2. Very, very, very slow
I suspect the speed problem has to do with relying too much on MTScript for things. I think it would be way faster to let the regex searching isolate each item, rather than searching within the string in a FOR loop: or, better yet, just treat each line as a string list and let Wiki: json.fromList() handle all the parsing. The more built-in functions you can use, the faster it will be.
The loops will still be necessary, but all the extraction can be done by the faster built-in functions. This is not tested, so I'm not sure if this is faster than a FOREACH loop specifying a delimiter each time.
Furthermore, your code could probably be more efficient if you didn't evaluate for the first row in EVERY loop: just do that part once, then go through each subsequent loop for the rest: it makes the loops shorter.
[ ninja'd: and your brain fart (implemented below). ]

For example:
ListParsing

Code: Select all

<!-- List Parsing XPTL -->
[h: args  = macro.args]
[h: Title = json.get(args, "Title")]
[h: text  = json.get(args, "TextBox")]

[delim = "%09"]
[lb = "%0A"]

[ nLine = 0 ]
[jTable = "{}"]
[row=""]

[ line.f = strfind( text , "(.*?)(\\n)" )]    <!-- isolate individual lines: or use "(.*?)(%0A)" if encoded. -->
[ nLine = getFindCount( line.f )]

<!-- first line only: might be more efficient to pull this out and only do it once 
    (rather than having to evaluate and skip EVERY OTHER TIME THROUGH THE LOOP) -->
    [ line.txt = getGroup( line.f , 1 , 1 )]    <!-- get the 1st line (omitting line break) -->
    [ headerList = json.fromList( line.txt , delim )]    <!-- "%09" if encoded, decode( "%09" ) otherwise, or specified by argument? -->
    [ tableName = json.get( headerList, 0 )]

<!-- all subsequent rows -->
[h, FOR( l, 2 , nLine ), CODE: {
    [ line.txt = getGroup( line.f , l , 1 )]    <!-- get the 'lth' line (omitting line break) -->
    [ tab.f = strfind( line.txt , "(.*?)(\\t)" )]    <!-- isolate individual items between tabs: or use "(.*?)(%09)" if encoded. -->
    [nTab = getFindCount( tab.f )]    <!-- this might only really need to be done with the first line -->
    [ jRow = "{}" ]
    [ FOR( i , 1 , nTab ), CODE: {
        [ item = getGroup( line.txt , i , 1 )]    <!-- get the 'ith' item (omitting delimiter) -->
        [ column = json.get( headerList , i )]    <!-- get the 'ith' column name -->

        <!--name of current row / assemble row object-->
        [if( i==0 ): 
            row = item ;
            jRow = json.set( jRow , column, item )
        ]
        <!--
            if(nline && ntab): jTable = json.pset(jTable,row+"/"+column,item)
            - convenient, but adds yet more nested loops - inefficient.
        -->
    }]
    <!--fill the json object-->
    [ jTable = json.set( jTable , row , jRow )]
}]
[ macro.return = json.set( "" , tableName , jTable )]
 
Or, better yet:
ListParsing2

Code: Select all

<!-- List Parsing XPTL -->
[h: args  = macro.args]
[h: Title = json.get(args, "Title")]
[h: text  = json.get(args, "TextBox")]

[delim = "%09"]
[lb = "%0A"]

[ nLine = 0 ]
[jTable = "{}"]
[row=""]

[ line.f = strfind( text , "(.*?)(\\n)" )]    <!-- isolate individual lines: or use "(.*?)(%0A)" if encoded. -->
[ nLine = getFindCount( line.f )]

<!-- first line only: might be more efficient to pull this out and only do it once 
    (rather than having to evaluate and skip EVERY OTHER TIME THROUGH THE LOOP) -->
    [ line.txt = getGroup( line.f , 1 , 1 )]    <!-- get the 1st line (omitting line break) -->
    [ headerList = json.fromList( line.txt , delim )]    <!-- "%09" if encoded, decode( "%09" ) otherwise, or specified by argument? -->
    [ tableName = json.get( headerList, 0 )]
    [ nTab = json.length( headerList )]    <!-- this might only really need to be done with the first line -->

<!-- all subsequent rows -->
[h, FOR( l, 2 , nLine ), CODE: {
    [ line.txt = getGroup( line.f , l , 1 )]    <!-- get the 'lth' line (omitting line break) -->
    [ row = json.fromList( line.txt , delim )]    <!-- "%09" if encoded, decode( "%09" ) otherwise, or specified by argument? -->
    [ jRow = "{}" ]
    [ FOR( i , 1 , nTab ), CODE: {
        [ item   = json.get( row , i )]    <!-- get the 'ith' item (omitting delimiter) -->
        [ column = json.get( headerList , i )]    <!-- get the 'ith' column name -->

        <!--name of current row / assemble row object-->
        [if( i==0 ): 
            rowName = item ;
            jRow = json.set( jRow , column, item )
        ]
    }]
    <!--fill the json object-->
    [ jTable = json.set( jTable , rowName , jRow )]
}]
[ macro.return = json.set( "" , tableName , jTable )] 
"The trouble with communicating is believing you have achieved it"
[ d20 StatBlock Importer ] [ Batch Edit Macros ] [ Canned Speech UI ] [ Lib: Math ]

wolph42
Winter Wolph
Posts: 9999
Joined: Fri Mar 20, 2009 5:40 am
Location: Netherlands
Contact:

Re: Too Many Darn Lists

Post by wolph42 »

That's even better.

And also ninja'd...: in the meantime I executed my brainfart, and the lot went from 5.5 minutes down to 21 seconds to process 100x25. Moreover, the increase became more linear; I concluded that the exponential growth starts (very gradually) after 2500 items.

It took a total of 65 seconds to process the whole table. The question now is: is that acceptable? (It's a rather huge table, but 1 minute is also quite a long time.)

What you came up with is exactly my earlier point about the Fu.

Attached are the results.

With your ninja action, however, especially the json.fromList trick, which I simply didn't think of, things can get quite a bit faster again.
Attachments
tablesList b4.zip
(112.24 KiB) Downloaded 63 times

biodude
Dragon
Posts: 444
Joined: Sun Jun 15, 2008 2:40 pm
Location: Montréal, QC

Re: Too Many Darn Lists

Post by biodude »

wolph42 wrote: It took a total of 65 seconds to process the whole table. The question now is: is that acceptable? (It's a rather huge table, but 1 minute is also quite a long time.)
My attitude towards the speed of these things has to do with context: I don't mind if an import action takes a few seconds or even a minute, if it saves me a few minutes (or hours) of manual typing. I expect an Attack or HP macro to be near-instantaneous, however ;)
If you expect the macro to take more than 5 seconds to execute, I think a standard disclaimer would suffice, warning the user that it might take a while. Especially since there is no way to give feedback to the user indicating that the process is moving (slowly) without interrupting the process (i.e. via the input() function). So, no progress bars :(

Glad I could help. I guess I have learned a few things in the past year :mrgreen:!

Any ideas regarding the relative speeds of FOREACH( var , list , "" , delim ) vs. FOR( var , start, end )?
I seem to remember Craig mentioning that FOREACH is generally faster than FOR or COUNT, but I don't know about the delimiter part.
"The trouble with communicating is believing you have achieved it"
[ d20 StatBlock Importer ] [ Batch Edit Macros ] [ Canned Speech UI ] [ Lib: Math ]

wolph42
Winter Wolph
Posts: 9999
Joined: Fri Mar 20, 2009 5:40 am
Location: Netherlands
Contact:

Re: Too Many Darn Lists

Post by wolph42 »

OK, I tested your piece of code. It needed some debugging; most interesting, and beyond my comprehension, was the fact that I had to change:

Code: Select all

<!-- all subsequent rows -->
[h, FOR( l, 2 , nLine ), CODE: {
into

Code: Select all

<!-- all subsequent rows -->
[h, FOR( l, 2 , nLine+1 ), CODE: {
I did check the value, but as soon as it reached 4 it skipped the loop instead of running it the last time.

This is what I made of it:

Code: Select all

<!-- List Parsing XPTL -->
[h: args  = macro.args]
[h: Title = json.get(args, "Title")]
[h: text  = json.get(args, "TextBox")]

[delim = "%09"]
[deDelim = decode(delim)]
[lb = "%0A"]

[ nLine = 0 ]
[jTable = "{}"]
[row=""]
[rowName=""]

[ line.f = strfind( text , "(.*?)(\\n)" )]    <!-- isolate individual lines: or use "(.*?)(%0A)" if encoded. -->
[ nLine = getFindCount( line.f )]

<!-- first line only: might be more efficient to pull this out and only do it once 
    (rather than having to evaluate and skip EVERY OTHER TIME THROUGH THE LOOP) -->
    [ line.txt = getGroup( line.f , 1 , 1 )]    <!-- get the 1st line (omitting line break) -->
    [ headerList = json.fromList( line.txt , deDelim )]    <!-- "%09" if encoded, decode( "%09" ) otherwise, or specified by argument? -->
    [ tableName = json.get( headerList, 0 )]
    [ nTab = json.length( headerList )]    <!-- this might only really need to be done with the first line -->

<!-- all subsequent rows -->

[h, FOR( l, 2 , nLine+1 ), CODE: {
    [ line.txt = getGroup( line.f , l , 1 )]    <!-- get the 'lth' line (omitting line break) -->
    [ row = json.fromList( line.txt , deDelim )]    <!-- "%09" if encoded, decode( "%09" ) otherwise, or specified by argument? -->
    [ jRow = "{}" ]
    [ FOR( i , 0 , nTab), CODE: {
        [ item   = json.get( row , i )]    <!-- get the 'ith' item (omitting delimiter) -->
        [ column = json.get( headerList , i )]    <!-- get the 'ith' column name -->

        <!--name of current row / assemble row object-->
        [if( i==0 ): rowName = item ; jRow = json.set( jRow , column, item )]
    }]
    <!--fill the json object-->
    [ jTable = json.set( jTable , rowName , jRow )]
}]

[ macro.return = json.set( "" , tableName , jTable )]    <!-- return the parsed table, as in the original version -->
It beat my routine by 23 seconds (42 seconds vs. 65 for my routine to parse the whole table).

biodude
Dragon
Posts: 444
Joined: Sun Jun 15, 2008 2:40 pm
Location: Montréal, QC

Re: Too Many Darn Lists

Post by biodude »

wolph42 wrote:OK, I tested your piece of code. It needed some debugging; most interesting, and beyond my comprehension, was the fact that I had to change:

Code: Select all

<!-- all subsequent rows -->
[h, FOR( l, 2 , nLine ), CODE: {
into

Code: Select all

<!-- all subsequent rows -->
[h, FOR( l, 2 , nLine+1 ), CODE: {
I did check the value, but as soon as it reached 4 it skipped the loop instead of running it the last time.
Yeah, I keep forgetting that's how FOR works nowadays: it stops one interval short of the 'end' value. Convenient for intervals like 0 to length(var), but less intuitive otherwise. Sorry about the debugging - that's what I get for not testing ;)
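A minimal illustration of that behaviour (generic list, nothing framework-specific):

Code: Select all

<!-- FOR stops one short of 'end': FOR( i, 0, 3 ) visits i = 0, 1, 2 -->
[h: list = "a,b,c"]
[h, FOR( i, 0, listCount( list ) ), CODE: {
    [ item = listGet( list, i ) ]    <!-- visits every index, 0 through listCount-1 -->
}]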

wolph42 wrote:It beat my routine by 23 seconds (42 seconds vs. 65 for my routine to parse the whole table).
Hunh. So, is the FOR the only major difference? That implies that FOREACH() with a delimiter is slower than pre-parsing with strfind() and then using FOR(), pulling results out with getGroup() within the loop... Good to know; thanks for checking that out.
"The trouble with communicating is believing you have achieved it"
[ d20 StatBlock Importer ] [ Batch Edit Macros ] [ Canned Speech UI ] [ Lib: Math ]

neofax
Great Wyrm
Posts: 1694
Joined: Tue May 26, 2009 8:51 pm
Location: Philadelphia, PA
Contact:

Re: Too Many Darn Lists

Post by neofax »

wolph42 wrote:
neofax wrote:MapTool tables can hold JSONs as well as tokens, and as pointed out, they have pics associated.
and as has also been pointed out: it's read-only AND, moreover, it's insanely cumbersome to create a list even with my Excel tool.
I don't foresee someone changing tavern names that often, so the read-only point is kinda moot. Also, I don't see how making a JSON for a table is harder than for a token.
