All of the orientation and location data is already available through our vision interface (Community Core Vision 1.4 or later) and can generate events that would need to be handled. My initial plan is to lean heavily on MT4j (MultiTouch 4 Java) when adding these elements to MapTool.
I would love to get an official sanction to add MT support to MapTool.
Should we spawn a separate thread for MapTool MT discussions?
I have zero experience with MT4j but I'm certainly willing to help in integrating that library. Feel free to start a thread in the Developer Notes section to discuss technical details. One of the first goals for 1.4 is to better modularize the existing code and that could serve you well as the input handling would be separated into a set of APIs for input event handling. Start your other thread and we can get into it in more detail.
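To make the modularization idea concrete, here is one possible shape for a decoupled input layer. Everything below is a hypothetical sketch, not actual MapTool 1.4 code: the names `PointerEvent`, `PointerListener`, and `PointerBus` are invented for illustration. The point is that a TUIO backend and the existing Swing mouse handling could both translate their native events into one shared interface:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a decoupled input API for MapTool.
// A minimal pointer event: one finger/cursor (or the mouse) at a screen position.
class PointerEvent {
    final int pointerId;   // cursor/finger id; a mouse backend could always use 0
    final float x, y;      // screen coordinates in pixels
    PointerEvent(int pointerId, float x, float y) {
        this.pointerId = pointerId;
        this.x = x;
        this.y = y;
    }
}

// Tools implement this instead of AWT's MouseListener, so they don't
// care whether the event came from a mouse or a TUIO tracker.
interface PointerListener {
    void pointerDown(PointerEvent e);
    void pointerMoved(PointerEvent e);
    void pointerUp(PointerEvent e);
}

// A trivial dispatcher: input backends push events in, registered tools get them.
class PointerBus implements PointerListener {
    private final List<PointerListener> listeners = new ArrayList<>();
    void addListener(PointerListener l) { listeners.add(l); }
    public void pointerDown(PointerEvent e)  { for (PointerListener l : listeners) l.pointerDown(e); }
    public void pointerMoved(PointerEvent e) { for (PointerListener l : listeners) l.pointerMoved(e); }
    public void pointerUp(PointerEvent e)    { for (PointerListener l : listeners) l.pointerUp(e); }
}
```

With something like this in place, adding MT support becomes "write one more backend that feeds the bus" rather than touching every tool.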
I looked at MT4J for this myself, but found that their code was deeply entangled with Processing, and getting Processing to work alongside MapTool proved kludgy at best, and beyond my level of patience to continue.
Their event-processing and gesture-recognition code might port across well enough, though.
In my code (which stalled, unfortunately, due to real life) I created a new version of the MapTool "Hand Tool" for map interaction. That meant I didn't have to deal with simulating mouse events; I just consumed the TUIO events directly.
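For anyone taking the same route, the core pattern is small: remember where the cursor was last seen and turn each update into a pan delta. This is a hypothetical, self-contained sketch (the `MapPanner` class and its methods are invented here; in real code the TUIO Java client would call into something like this from its listener callbacks, and TUIO delivers coordinates normalized to 0..1):

```java
// Hypothetical sketch of a hand-tool that consumes touch-cursor updates
// directly instead of synthesizing mouse events.
class MapPanner {
    private final int screenWidth, screenHeight;
    private float lastX, lastY;       // last cursor position, in pixels
    private boolean tracking = false;
    int offsetX = 0, offsetY = 0;     // accumulated map pan, in pixels

    MapPanner(int screenWidth, int screenHeight) {
        this.screenWidth = screenWidth;
        this.screenHeight = screenHeight;
    }

    // Cursor appeared (TUIO "add cursor"); nx/ny are normalized 0..1.
    void cursorDown(float nx, float ny) {
        lastX = nx * screenWidth;
        lastY = ny * screenHeight;
        tracking = true;
    }

    // Cursor update: pan the map by the movement since the last event.
    void cursorMoved(float nx, float ny) {
        if (!tracking) return;
        float x = nx * screenWidth, y = ny * screenHeight;
        offsetX += Math.round(x - lastX);
        offsetY += Math.round(y - lastY);
        lastX = x;
        lastY = y;
    }

    // Cursor lifted (TUIO "remove cursor").
    void cursorUp() { tracking = false; }
}
```

The nice part of consuming TUIO directly is exactly what's described above: no fake mouse events, and multi-cursor gestures stay distinguishable because each cursor keeps its own id.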
It looks like you guys are further along than I was though.